Take note of these two words: Artificial Intelligence. In the immediate future, you will hear about little else with as much emphasis. That future promises great changes, filled with both opportunities and unknowns. No one knows what the scope and speed of those changes will be, but the process has begun in earnest. Time will tell whether this revolution is comparable, in its capacity for transformation, to those that came before it; what is certain is that artificial intelligence is already at the center of our future.
We must begin by fully accepting that our increasingly widespread dependence on existing and future technologies has no return ticket. The good news is that advances in Artificial Intelligence (AI) will boost the use of the new natural resource that many call Big Data and that few have been able to transform into value. In fact, the concept of big data is beginning to disappear from conversations, which now lead directly to AI. Managing big data requires core data science skills to extract value and to define the key performance indicators that determine the ROI for each company. It requires not only data science skills to build advanced data APIs, but also specialized AI skills to build neural networks and program them in Python, TensorFlow, or PyTorch on an infrastructure of Jupyter Notebooks and Google Colaboratory notebooks. Unlike the IT field, the data science field has no separate functional or technical consultant: the data scientist takes on both the business domain and the technical domain to define the processes, applying data science skills drawn from mathematics, computer science, statistics, and programming.
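To make this concrete, here is a minimal sketch of the kind of neural network a data scientist might build and train in PyTorch. The layer sizes, data, and hyperparameters are illustrative assumptions, not drawn from any particular project.

```python
import torch
import torch.nn as nn

# An illustrative two-layer network; sizes are hypothetical.
model = nn.Sequential(
    nn.Linear(10, 32),   # 10 input features
    nn.ReLU(),
    nn.Linear(32, 1),    # single regression output
)

x = torch.randn(64, 10)  # dummy batch of 64 examples
y = torch.randn(64, 1)   # dummy targets
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):   # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(epoch, loss.item())
```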
Today's consumers have already assumed and assimilated that life unfolds in harmony with the new technologies and environments that are continually being introduced. Aware that their lives are "spied on" and "tracked" at every step, consumers expect the companies that hold all this data to anticipate their needs and deliver instant, personalized answers to every query. Let the data make people's lives better. In other words, a fairer exchange: data in exchange for added value.
The truth is that AI will not be limited to a specific area. From commerce to the Treasury, from the media to the new customized TV, from cities to tourism, there will come a day when customers, citizens, taxpayers, tourists, and so on will talk with interactive chatbots that can recommend products, restaurants, hotels, services, and shows based on their search history, comments, and purchases, and that can suggest activities, offer special discounts, or handle service issues such as transportation or other specific needs.
According to a study called Artificial Intelligence and Life in 2030, a great deal will change over the next decade. AI research trends are shifting rapidly, with companies focusing more on machine learning, deep learning, computer vision, robotics, and natural language processing. In the transportation industry, cars are expected to get quite a bit smarter with time, and self-driving vehicles will become the norm.
Apart from that, home automation and service robots will become considerably more advanced as the world realizes that robots can make fantastic companions and helpers at home. In healthcare, analytics could be used to gather better data about patients. In low-resource communities, machine learning approaches could be used to improve public safety and security. AI and machine learning will open new avenues and job opportunities as more and more industries embrace this technology.
Two issues dominate the present of AI: security and privacy. The more data we give up (renouncing a privacy that is already fictitious, since we hand our data over sometimes knowingly and often unknowingly), the better the machine will be able to predict our needs. And since AI is powered by data, the concept of privacy is being practically erased.
For many of us, tax data is critical, and it is important to keep it private so that it does not fall into the wrong hands, whether within society or among hackers on the dark web and deep web who can ruin careers by creating false identifications and abusing stolen identities. Social security numbers, middle names, dates of birth, and medical records matter to every corporate citizen. Dr. Ganapathi Pulipaka tweeted in April 2019 about tools for preserving privacy with the Python library PySyft. PySyft introduces a framework for preserving the privacy of such medical records and tax data, leveraging the power of PyTorch's tensors together with multi-party computation, a form of privacy-preserving computation that is a subfield of cryptography. This is accomplished through federated learning. The deep learning models are trained in PyTorch to counter reverse-engineering attacks, which can otherwise retrieve sensitive data from models even when privacy-preserving deep learning algorithms have been applied. The framework enables the implementation of advanced techniques such as differentially private methods, federated learning, and multi-party computation through a single interface.

The structure of a PySyft tensor follows a parent-child hierarchy between the PyTorch tensor and the Syft tensor. The adversary is assumed to reside outside the system of participants, e.g., Alice and Bob (as an eavesdropper on the sender and receiver); sending a tensor across the local and remote chains relies on a standardized protocol for the exchange of communication between Alice and Bob, which is what makes federated learning possible. A chain structure for the abstract SPDZ tensor model has been developed and deployed in PyTorch to override tensor operations and share the data between sender and receiver.
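As a rough illustration of these ideas, the following sketch follows the PySyft 0.2-era tutorial API for remote tensors and SPDZ-style additive secret sharing. The worker names (alice, bob, crypto_provider) and the values are hypothetical, and the exact calls may differ across PySyft versions.

```python
import torch
import syft as sy  # PySyft, 0.2-era API

hook = sy.TorchHook(torch)                  # extend torch tensors with Syft behavior
alice = sy.VirtualWorker(hook, id="alice")  # simulated remote workers
bob = sy.VirtualWorker(hook, id="bob")
crypto_provider = sy.VirtualWorker(hook, id="crypto_provider")

# Send a tensor to a remote worker; locally we keep only a pointer,
# the head of the chain described above.
x = torch.tensor([25000, 43000]).send(bob)  # e.g., private tax figures
y = x + x                                   # executed remotely on bob
print(y.get())                              # retrieve the plaintext result

# SPDZ-style multi-party computation: additively secret-share a tensor
# between alice and bob so that neither holds the plaintext value.
z = torch.tensor([98765]).share(alice, bob, crypto_provider=crypto_provider)
w = z + z                                   # computed directly on the shares
print(w.get())                              # reconstruct: tensor([197530])
```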
The chain structure is the linchpin of the framework for performing abstract operations on tensors. Any transformation performed by sending tensors between workers can be seen as a chain of operations within a special class, the abstract SyftTensor. Each SyftTensor represents a transformation of the data, or its current state, and these can be chained together. The chain is designed with the head PyTorch tensor at the top and SyftTensors as its child attributes. Debugging such a chain of operations in PyTorch could become complex when moving from virtual-context execution to real-context execution, so the framework simplifies debugging with the notion of virtual workers. Virtual workers reside on the same machine and do not communicate through a network, yet they expose the same interface as actual workers. In federated learning, network workers can have multiple implementations and share their communications through network sockets and WebSockets, fitting the data science ecosystem of web-based Jupyter notebooks or Google Colaboratory notebooks.
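Building on the virtual workers above, here is a minimal federated training loop in the same PySyft 0.2-era style: the model travels to the data instead of the data traveling to the model. The toy model, data shards, and learning rate are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import syft as sy  # PySyft, 0.2-era API

hook = sy.TorchHook(torch)
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")

# Each worker holds its own private shard of a toy dataset.
data_alice = torch.tensor([[0., 0.], [0., 1.]]).send(alice)
target_alice = torch.tensor([[0.], [0.]]).send(alice)
data_bob = torch.tensor([[1., 0.], [1., 1.]]).send(bob)
target_bob = torch.tensor([[1.], [1.]]).send(bob)

model = nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for data, target in [(data_alice, target_alice), (data_bob, target_bob)]:
    model.send(data.location)  # the model travels to the data, not vice versa
    opt.zero_grad()
    loss = ((model(data) - target) ** 2).sum()
    loss.backward()
    opt.step()
    model.get()                # bring the updated weights back
    print(loss.get().item())   # retrieve the remote loss for monitoring
```

Because the virtual workers expose the same interface as real network workers, this same loop can later be pointed at WebSocket workers without restructuring the training code.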
Dr. Ganapathi Pulipaka is a Chief Data Scientist, machine learning researcher, and AI advisory board member who specializes in Artificial Intelligence and believes that the future of humanity hinges on AI. He is currently working as Chief Data Scientist and SAP Technical Lead at Accenture, one of the largest technology companies in the world, and has worked with many major companies on collaborative AI labs and commercial projects.