Season 4 - Episode 412
Technologies like artificial intelligence, deep learning and machine learning impact our lives in countless ways, but how much do you know about them? On this Baylor Connections, Pablo Rivas, assistant professor of computer science at Baylor, gives listeners a class on "Deep Learning 101." The author of the book Deep Learning for Beginners, Rivas shares how deep learning shapes services used by consumers and analyzes ethical issues to be considered as the field grows.
Derek Smith:Hello and welcome to Baylor Connections, a conversation series with the people shaping our future. Each week, we go in depth with Baylor leaders, professors and more, talking about important topics in higher education, research, student life and today, deep learning. We'll be talking with Dr. Pablo Rivas. Dr. Rivas serves as assistant professor of computer science at Baylor, an expert in artificial intelligence, deep learning and machine learning. He's the author of the recently released book, Deep Learning for Beginners, which guides readers through the processes that he teaches undergraduate students and analyzes ethical situations stemming from these technologies' capabilities. He served as a postdoctoral researcher at Baylor from 2012 to 2015 and spent time in Baylor's Truett Seminary before returning to the Baylor faculty last year. How do things like deep learning and artificial intelligence impact our day-to-day life? We will talk about that on the program today with Dr. Pablo Rivas. Dr. Rivas, thanks so much for joining us. It's great to virtually meet you and to talk about this today.
Pablo Rivas:Thank you, Derek. It's great to be here.
Derek Smith:Just before we dive in, let's set the table. I think some of those topics I mentioned, artificial intelligence, deep learning, we're hearing those more and more, but maybe our understandings are nebulous or incomplete. So we'll dive into that, but first I'm curious: where are some places that most of us as consumers would interact with your field of study, even if we don't know it, even if we're not even thinking about it?
Pablo Rivas:Well, the easiest place to find AI and machine learning today is in our smartphones; most of our TVs and most of our computers now have some type of AI technology behind them. On our smartphones, think of our browsers: every time you type something into your browser to search, that search is usually backed by a machine learning engine. Also, if you have any of those devices that you can talk to, those virtual assistants are all based on AI. Even when you're watching TV, that lineup, those shows that you see, is most likely generated by machine learning or a recommendation system that is maximizing profits and making sure that people are watching the shows at the time that they should.
Derek Smith:That makes sense. That makes sense. Let's give some definitions. As I mentioned, you wrote the book Deep Learning for Beginners, which came out last year. And so most of us are beginners to these topics to say the least, myself included. So I was wondering if you could give us some definitions and maybe even tell us how they interact with one another. I want to ask you about deep learning, machine learning, artificial intelligence and then fuzzy systems is another area that you researched. Let's start with deep learning. What is deep learning?
Pablo Rivas:Well, deep learning is part of artificial intelligence, and it is actually a subfield of machine learning itself.
Derek Smith:Okay. Actually, you don't have to go in that order. If there's a better umbrella way for you to describe the areas I just asked about, I will cede the floor to you on that for sure.
Pablo Rivas:Yes. We can start with artificial intelligence and then go from top to bottom. For artificial intelligence, a definition that most people can connect with would be a system, a technology, a methodology, which can analyze a situation or factors and make a decision or provide some type of output beyond random chance. If you think about predicting something, what demonstrates intelligence to some extent is that it operates beyond random chance, that it's better than just flipping a coin. And the most intelligent systems will be those that perform closer to human performance or even better. Machine learning in particular, which is my area, involves learning from data: it learns patterns from data in order to produce some type of output that you desire. It's all about the data. I have a motto for one of my classes here at Baylor. It says, "In God we trust; everyone else, show me the data." Because it is this idea of using what we know to make connections, find rules and make an informed decision based on that data. Now, deep learning comes from this area of machine learning. The models that machine learning produces are based on parameters, and deep learning has recently been defined as this set of over-parameterized models. That is, a deep learning model has a ton of parameters, bigger, larger models than we used to have 10 or 20 years ago. The computers that we have today are enabling us to build these bigger, larger systems that we call deep learning.
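Dr. Rivas's point that machine learning models are "based on parameters," and that deep learning simply means having far more of them, can be sketched in a few lines. This is an illustrative example and not from the episode; the layer sizes and the NumPy implementation are my own assumptions, and real deep networks have millions or billions of parameters rather than the handful shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal two-layer neural network. The weights and biases are the
# "parameters" being counted; layer sizes here are purely illustrative.
def init_layer(n_in, n_out):
    return rng.normal(size=(n_in, n_out)) * 0.1, np.zeros(n_out)

W1, b1 = init_layer(4, 16)   # hidden layer: 4 inputs -> 16 units
W2, b2 = init_layer(16, 3)   # output layer: 16 units -> 3 scores

def forward(x):
    h = np.maximum(0, x @ W1 + b1)   # ReLU hidden activation
    return h @ W2 + b2               # raw output scores

# "Bigger" models, in the over-parameterized sense, simply have more of these.
n_params = sum(p.size for p in (W1, b1, W2, b2))
print(n_params)  # 4*16 + 16 + 16*3 + 3 = 131
```

Scaling the layer widths up is all it takes to move from a toy model like this toward the over-parameterized regime he describes.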
Derek Smith:What are some of these larger systems used for? What are some uses that we might find in higher ed or elsewhere?
Pablo Rivas:One of the larger systems that we can find today is in translation, in particular machine translation. These are systems that take text in one language and translate it to another, and they find these deep connections not only from one language to another, but from one to many. So these are pretty large systems. And many of the systems that use computer vision, like recognizing objects using your smartphone, also have pretty large systems behind them.
Derek Smith:We are visiting with Dr. Pablo Rivas, assistant professor of computer science at Baylor. You mentioned that these technologies are advancing greatly compared to years back, decades back. How rapidly? I think for most of us it seems like it's fast, that things are growing and expanding, but from your vantage point inside the discipline, how should we think about how rapidly these technologies are advancing?
Pablo Rivas:It is pretty fast-growing overall, but some areas grow more rapidly than others, even within AI. One of those areas is ethics; ethics in AI has really grown exponentially. I think it's a consequence of AI and machine learning being more accessible to the general population, and of having more people who are qualified to understand what's going on. As we continue to teach people how to do machine learning and deep learning, there is a growing interest in doing things, but there are consequences to that, which have led to an increased interest in how we do it correctly: so as to not hurt people, to maintain a fair response to harm, to keep people accountable, and to make sure that there's transparency about the things that AI does.
Derek Smith:Well, let's talk about that, because I think that's an area that people notice. It could be as simple as: if I buy a book about baseball on Amazon, I'm going to get more recommendations for books about baseball. Or if I Google a hotel in Dallas, I'm going to get more ads about visiting Dallas. Then there are other things: you hear people say they mentioned something around their Alexa or Google device and started getting ads, and that concerns people. So I'm curious, when you think about the power and now the accessibility of it, what are some of the ethical issues that arise? Because the potential to really understand people's behavior, and even go all in on a profit motive, could be an understandable impulse for a company.
Pablo Rivas:Right. I think you mentioned one of the problems: in some cases, from the perspective of industry, profit is the main goal, and that is not necessarily in accordance with, and can even create conflict with, what the consumer wants. There needs to be some trade-off between what is considered profitable and what is good for the consumer. That's a very complicated topic, but it is certainly something that has motivated a lot of recent research, including the creation of standards. I'm part of a group at IEEE, a standards association, where we're developing standards for AI ethics. One of the core tenets of these recommended practices is to keep things under fairness, accountability, and transparency. It is understandable that some companies may not want to release a particular model because of intellectual property, but at least the consumer needs to know whether the training data comprises a diversity of people, whether someone thought about the consequences of this technology and how it's going to affect lives, and who is going to be responsible if something goes wrong. These ideas of transparency, accountability, and fairness can lead to very good technology.
Derek Smith:When you talk about fairness as it relates to a consumer, and I'm curious, you being a former seminary student, there's a faith component in your background that feeds into these ethics questions. But when you think about it broadly, when we talk about fairness for a consumer, what does that look like?
Pablo Rivas:Well, I can think of an example that could make it clear. Say you go to open a bank account, and this is your first bank account ever. As a young man or young woman, when you go, you have only your name, your address, your Social Security number, and zero credit history. So if there is machine learning or AI behind an account recommendation system, there is nothing to go on except, for example, your address. That implies the system could approve or deny your credit solely based on your zip code, or even your gender, only because there was no credit history. A system that is not trained fairly is going to take something called protected attributes, or protected features, to make decisions. If there's transparency about this and the public knows about it, the public has a say, and can say we shouldn't decide the financial future of a person based on gender, or on a zip code just because a particular zip code tends not to pay back its debt. So that is a concern, and in that sense, fairness should mean making sure the system operates for everybody. Another way of thinking about this is that we want AI or machine learning models that reflect the society that we want to be, not necessarily the society that we are right now. Because right now we are a society that works for the majority of people, but not necessarily for all. It would be interesting to create models or technology that works for all. And the thing is, it's very possible. That's the good news. It is possible. We can do it.
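One simple safeguard suggested by this example, removing protected attributes before a model ever sees an applicant's record, can be sketched like this. The record fields and the list of protected attributes are hypothetical, chosen to mirror the bank-account scenario above; note that dropping columns alone does not guarantee fairness, since remaining features can act as proxies for the removed ones.

```python
# Hypothetical applicant record; the field names are illustrative,
# not from any real banking system.
applicant = {
    "name": "J. Doe",
    "zip_code": "76706",
    "gender": "F",
    "income": 42000,
    "credit_history_months": 0,
}

# Attributes a fair pipeline should not feed into the credit decision.
PROTECTED = {"name", "zip_code", "gender"}

def strip_protected(record):
    """Drop protected attributes so the model never trains on them."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

features = strip_protected(applicant)
print(sorted(features))  # ['credit_history_months', 'income']
```

In practice, fairness standards go further than this, auditing training data for diversity and checking model outcomes across groups, but making the protected-attribute boundary explicit in code is a transparent first step.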
Derek Smith:You mentioned that you were part of a group that thinks about these ethical issues. And on the Baylor side, I know we've talked to Stacy Petter before in information systems about the ethical handling of data. For you, where do those things intersect, whether it's making technology accessible or equitable? We also think about privacy and security, given the amount of data that deep learning and artificial intelligence enable companies to have about me or you or others. Do those areas intersect much in your work, or is that something where you would tend to collaborate with someone else on a project?
Pablo Rivas:Yeah. The group that I work with has about 14 standards, and I work particularly on three of them. One is called algorithmic bias, and it's all about creating a standard, or practices, that industry can take and use to verify that they have thought about certain consequences of the data and the bias that can be generated by a model or by the data itself. Another one is empathic technology. That one is about technology that uses your emotions to make recommendations or to determine your state of mind and offer you something. Each of those can carry issues, and there's a psychological aspect or a social science aspect to them that is important, so we work closely with lawyers and social scientists on that one too.
Derek Smith:Wow. Well, Dr. Rivas when you talk about things like empathy and psychology, what bridges the gap between those psychological concepts and the technology itself? How do those things come together? Hopefully good ways, but certainly in ways that people are utilizing?
Pablo Rivas:The same way that there are personality tests we can take that help us understand ourselves and the way we think and behave, the data that we produce on social media, in our online behavior, or other types of data can also be used to create a profile. And while creating a profile based on machine learning can be scary, it can be useful in detecting people with mental health issues, or helping people find what they need quicker, or finding people who think alike and creating more diversity within groups. So the connections with psychology are very important for the group to be successful, and we want to listen to these people, because here I am coming from a computer science perspective on machine learning, and I know about algorithms and runtime complexity and memory and all these things.
Pablo Rivas:And here they are thinking about the human mind, which is amazing. So we have to pay close attention and see where the lines are that we need to put in place to provide a good standard and good practices.
Derek Smith:I'm curious if you have thoughts, for those of us outside your discipline, on a healthy way to think about and approach this. Some people are indifferent; others are concerned but realize its utility. We also probably know some people who seem downright fearful about their privacy or security when they talk about some of these things. So from your standpoint, what are some healthy ways for us as consumers, as individuals, to think about this exponential growth that's taking place, and how can we make the best use of it in our own lives?
Pablo Rivas:Well, the thing with AI is that it is a technology that changes very quickly. I agree that for a number of years many people did not think about the consequences, and that has caused some people to fear some of the advances now, which is totally understandable. However, what the public needs to know is that there are people like us working to make sure that this technology is safe, that it is secure, that it is fair, and that the public can have access to know what is behind it. Obviously they cannot see the code of a learning algorithm or a machine learning model, but we can now better understand how the algorithms make their decisions, and we can keep people accountable.
Pablo Rivas:And so now there is a safety net: the consumer, the public, can know that regulation is coming. There's a lot of regulation already in the European Union, and something similar is happening in America, to look after the consumer, the people who use the technology in the end, while at the same time keeping companies happy and engaged in using the technology. Baylor's Corporate Engagement Office recently sent me a study showing that a lot of industry has slowed down its adoption of AI because of this lack of regulation; companies were scared of being sued by consumers. But now, with these standards in place or coming out, we feel it's going to be safer for the consumer and better for industry adoption as well. And at Baylor, we want to continue training people in machine learning and AI, and we want to train them with these ethical standards and this mindset. In fact, the book that I wrote has short essays at the end of some chapters to motivate people to think about the consequences of this technology.
Derek Smith:You mentioned the book Deep Learning for Beginners. You've talked about educating students here at Baylor in the classroom to be excellent and ethical practitioners, and this book is a resource. Who's your target audience for the book, and how did it come together for you?
Pablo Rivas:This was targeted at programmers who don't know a lot of machine learning but want to make that bridge; they want to learn deep learning, and they already know some programming, so they can use this book. It is also for people in the machine learning area who already know machine learning but need to build and program things. It is for those two groups; it is definitely not for someone who knows nothing about programming and nothing about machine learning. There's an assumption that the reader is familiar with either programming or machine learning.
Derek Smith:You came back here to Baylor last fall. You'd been a postdoc here and a student at Truett, and now you're teaching students again. What are you enjoying most about that? And as we close here, what's on your research agenda going forward that you're excited about?
Pablo Rivas:Well, I'm excited about some connections I've been making with partners at the University of Texas at El Paso. Here at Baylor, we want to start a center for AI ethics and standards, and we're preparing a proposal for the National Science Foundation, hoping that we get funding to launch the center, provide industry with research, and help with compliance and things like that. On another avenue, we're exploring quantum machine learning, which is a brand new area, and I'm very excited about having a postdoc working with me, Dr. Javiera Duz, who is a physicist. I'm also collaborating with other people to use NASA satellite data and classify 20 years of data by the spectral signature of dust, which is a big problem if you're in Texas and in some other areas of the world as well.
Derek Smith:Wow. So there's really no shortage of areas where you and your students, whether now or someday, can collaborate with other disciplines.
Pablo Rivas:Yeah. It's a pretty exciting time to be in AI and machine learning.
Derek Smith:Well, very exciting. Well, Dr. Rivas, I really appreciate your time. Thanks so much for sharing with us today and helping us all get a little bit better understanding, and certainly more than we walked in with when it comes to things like AI and deep learning. I really appreciate it.
Pablo Rivas:Thank you Derek.
Derek Smith:Thank you very much, Dr. Pablo Rivas assistant professor of computer science at Baylor, our guest today on Baylor Connections. I'm Derek Smith. A reminder, you can hear these types of programs online at baylor.edu/connections, and you can subscribe to the program on iTunes. Thanks so much for joining us here on Baylor Connections.