We used a model called GPT-2, which is built from neural network components called transformer blocks and has more than a billion parameters. This makes it remarkably powerful: the trained model develops a general understanding of language that can be applied to various language-related tasks such as text classification, entity extraction, question answering and, as we demonstrate below, Natural Language Generation (NLG). We used a publicly available pre-trained model for this purpose. The idea is to demonstrate the potential of this class of models across language-related tasks.
We provided the following sentence as input to this algorithm.
“The traditional problems (or goals) of AI research include reasoning,
knowledge representation, planning, learning, natural language processing,
perception and the ability to move and manipulate objects.”
Depending on its configuration, the model can produce different results for the same input: technically, at each position it samples the next word from the probability distribution that the model assigns to candidate words at that position.
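To make the sampling idea concrete, here is a minimal sketch in plain Python. The probability values and the `sample_next_word` helper are illustrative inventions, not part of GPT-2 itself; real models work over tens of thousands of sub-word tokens, but the mechanism is the same.

```python
import math
import random

def sample_next_word(probs, temperature=1.0, seed=None):
    """Pick one word from a next-word probability distribution.

    probs: dict mapping candidate words to model probabilities.
    temperature > 1 flattens the distribution (more surprising picks);
    temperature < 1 sharpens it (safer, more predictable picks).
    """
    rng = random.Random(seed)
    words = list(probs)
    # Rescale log-probabilities by the temperature, then renormalize.
    logits = [math.log(probs[w]) / temperature for w in words]
    peak = max(logits)
    weights = [math.exp(l - peak) for l in logits]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(words, weights=weights, k=1)[0]

# Toy distribution for the word following "natural language ...":
next_word_probs = {"processing": 0.6, "generation": 0.3, "understanding": 0.1}
print(sample_next_word(next_word_probs, temperature=1.0, seed=42))
```

Running this repeatedly with different seeds picks different words in proportion to their probabilities, which is exactly why the same input sentence can produce entirely different continuations.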
The first output we got was as follows.
OUTPUT – 01
“The industry applications of AI are many, ranging from compliance to translation, all the way to emergency and disaster response.
One of the potential application areas where AI could make significant changes to health care is “patient-centered care”, where decision-making is based on the interests of the patient, not the demands of health providers or stakeholders. Traditional diagnostic methods are implemented for cancer detection, but progress in cancer treatment is often hampered by diagnostic inaccuracies that occur due to extensive misinterpreting, and inefficiencies that lead to inappropriate treatment or lack of treatment. Recent evidence suggests that personalized and predictive diagnostic systems may be an important solution to these issues.
The AI itself is, however, inherently unpredictable and sometimes useless, when applied to clinical care. Fortunately, the doctor’s task is often to provide reasonable advice in the context of limited resources. By helping the doctor and patient interpret the evidence in a manner which produces results that align with the best available evidence, AI could provide advice and guidance that would be invaluable to both patient and doctor.
Clinical AI applications already exist. Many existing diagnostic technologies rely on inference from image and motion analysis or clinical biomarkers. For example, studies have shown that BRS technology and other statistical intelligence tools are very useful in predicting stroke severity and prognosis. Other applications include cancer biomarker detection and prognostic analysis.
Although extremely impressive, it’s hard to say that AI has supplanted human beings in many clinical scenarios, for example, the diagnosis and treatment of patients with high morbidity. It’s easy to imagine that that is what may happen in the future, and AI would be immensely helpful in developing ways to distinguish between clinical cases and anomaly. Nevertheless, many practitioners believe the end of the next 50 years is still early enough that AI will not make significant advances in clinical care.”
It was interesting to see that the output starts with a listing of various potential uses of AI but then moves on to elaborate on clinical applications. The model maintains context over the entire passage and retains grammatical correctness for most of the output. All in all, the passage seems reasonably cohesive.
Now let’s look at the second output, generated from the same input but with a different random seed. Because each word is sampled from a probability distribution, this change in initial state alters the entire outcome.
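The effect of the seed can be seen even with a toy generator. The vocabulary and weights below are made up for illustration; the point is only that the seed fixes every subsequent random draw, so two seeds almost always diverge from the very first word.

```python
import random

def generate(word_weights, length, seed):
    """Draw `length` words from a fixed toy distribution.

    The sequence is fully determined by the seed: same seed,
    same output; a different seed almost always diverges.
    """
    rng = random.Random(seed)
    vocab, weights = zip(*word_weights.items())
    return " ".join(rng.choices(vocab, weights=weights, k=length))

toy = {"reasoning": 4, "planning": 3, "learning": 2, "perception": 1}
print(generate(toy, 5, seed=1))
print(generate(toy, 5, seed=2))
```

A real language model additionally conditions each draw on the words generated so far, which is why an early divergence snowballs into a completely different passage.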
OUTPUT – 02
“Formal studies of Artificial Intelligence typically begin with the algorithm or problem-solving procedure in a computer program (or sometimes more generically, a set of rules for a computer program) that enables it to perform some task. These rules are referred to as “agent rules” and define what an agent should or should not do. Agents may be autonomous or semi-autonomous. Agents that are capable of significant physical control in the real world, and then capable of deciding what actions to perform, are called autonomous agents. An agent is semi-autonomous when it does something “by itself” in its own right, rather than as a byproduct of the actions of others. Self-evident examples of self-evident agents are cameras and cars. Agents such as general-purpose computers that take actions based on common sense, and software applications that autonomously analyze textual input and make decisions based on the meaning of the input, are classified as semi-autonomous. A conceptually simpler agent such as a mouse is said to have essentially no autonomous decisions and so is considered fully autonomous.
Research in artificial intelligence is usually divided into three main phases. The first phase, which began with Alan Turing’s landmark paper “Computing Machinery and Intelligence,” has generally been termed machine learning and it has been applied to a wide range of research problems, ranging from speech recognition, to financial analysis, to drug discovery. The second phase, which began with John McCarthy’s seminal papers on adversarial networks (subsequently generalized), takes the form of machine learning for a set of problems known as self-driving cars, and applications to deep-learning systems that perform digital image processing, medical diagnosis, and atmospheric modeling.
The third phase, which began with Mark Kaliski’s work on decision trees and stochastic nets, takes the form of training and learning through reinforcement learning, a form of artificial intelligence which strives to find an optimal answer to some problem by continually adding to the training data. This phase is commonly referred to as machine learning for decision-making and has proved useful in areas such as online retail, customer-service robots, auctions, and human-computer interaction.”
This output is completely different from the first and speaks about autonomous agents and the evolution of AI through various phases. There are some very interesting errors – “An agent is semi-autonomous when it does something “by itself” in its own right, rather than as a byproduct of the actions of others” – and some outright bloopers, such as “A conceptually simpler agent such as a mouse is said to have essentially no autonomous decisions and so is considered fully autonomous.” And the reference to Mark Kaliski is a complete fabrication.
You will not find any material validating a contribution by him to reinforcement learning significant enough to warrant a mention here, yet without researching the subject a reader can very easily be tricked into believing it. This feeds into the current open debate on how to explain AI-produced results and how to verify and control their accuracy. In the future it may be fairly easy to publish such AI-produced content, and a reader would not be able to tell the difference between it and human-produced content.
While we agree that AI-generated blogs are not exactly perfect, this is a start. At this rate, the day does not seem far when AI might take over the content creation process, reducing the need for human input to a minimum.