
Toward the Machine's Acquisition of Free Will

AutoML

3 main points
✔️ Components of the machine learning pipeline, such as NAS, are increasingly automated
✔️ In this context, problem definition itself remains the last unautomated area
✔️ Outlines the basic structure of problem learning (PL), approaches to realizing it, and the constraints it must satisfy

Problem Learning: Towards the Free Will of Machines
written by Yongfeng Zhang
(Submitted on 1 Sep 2021)
Comments: Published on arXiv.

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV); Information Retrieval (cs.IR); Machine Learning (cs.LG)

The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

This paper (white paper) is conceptual, providing guidelines for leading into the future, and does not contain any specific algorithms or implementations.

A machine intelligence pipeline usually consists of six components: problem, representation, model, loss, optimizer, and evaluation metric. Taking sentiment or image classification as an example [1, 2, 3, 4], the problem is to classify sentences or images into labels of different sentiment or image classes. To solve this problem, words, sentences, or images are converted into a representation such as a vector [5]. The representations are then processed by models such as LSTMs [6] or CNNs [7, 8] and fed into a loss function such as cross-entropy loss [9], which characterizes the quality of the current representation and model. Optimizers such as backpropagation [10] and stochastic gradient descent (SGD) [11] are then used to optimize the loss for the best parameters. Representations and models may be integrated into a unified architecture [12], and representations can be designed manually (e.g., TF-IDF [13]) or learned automatically (e.g., word embeddings [5]). Finally, the task can be evaluated using several (usually manually designed) evaluation metrics, such as accuracy, precision, recall, and F-measure [14].
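To make the six components concrete, here is a minimal sentiment-classification sketch in PyTorch; every name, hyperparameter, and data point is an illustrative placeholder rather than something taken from the paper.

```python
# Minimal sketch of the six pipeline components for sentiment
# classification. All names and data are illustrative toys.
import torch
import torch.nn as nn

# Problem: classify sentences into {positive, negative} (labels 1/0).
sentences = [[2, 5, 7], [4, 1, 9], [3, 3, 8], [6, 2, 5]]  # token ids
labels = torch.tensor([1, 0, 1, 0])

# Representation: learned word embeddings.
embed = nn.Embedding(num_embeddings=10, embedding_dim=16)

# Model: a small LSTM followed by a linear classifier.
lstm = nn.LSTM(input_size=16, hidden_size=16, batch_first=True)
head = nn.Linear(16, 2)

# Loss: cross-entropy, characterizing representation/model quality.
criterion = nn.CrossEntropyLoss()

# Optimizer: SGD with backpropagation.
params = list(embed.parameters()) + list(lstm.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=0.1)

x = torch.tensor(sentences)
for _ in range(20):
    out, _ = lstm(embed(x))
    logits = head(out[:, -1])          # last hidden state per sentence
    loss = criterion(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Evaluation metric: accuracy (manually designed).
pred = head(lstm(embed(x))[0][:, -1]).argmax(dim=1)
print("accuracy:", (pred == labels).float().mean().item())
```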

There have been many attempts to automate components of the pipeline. For example, representation learning [15, 16, 17, 18] is concerned with automatically learning good features from data; model learning (neural architecture search or, more broadly, automated machine learning) [19, 20, 21] is concerned with learning good model architectures; loss learning [22, 23, 24] aims to learn a good loss function; optimization learning [25, 26, 27] aims to automatically discover a suitable optimization algorithm; and evaluation learning [28, 29] aims to discover learning-based evaluation metrics instead of manually designed rule-based ones.

On the other hand, one key component of the pipeline, problem definition, is still largely unexamined from an automation perspective. In the current AI research paradigm, significant effort by domain experts is usually required to identify, define, and formulate the key problems in a research or application domain, and the problems are usually cast into one of the standard formats such as classification, regression, generation, prediction, or ranking. Relatively complex problems are usually manually decomposed into several relatively simple sub-problems. For example, sentence generation may be represented as repeated word-ranking steps with beam search [43], and link prediction in a knowledge graph is evaluated as an entity-ranking problem [30].

Automatically discovering research and application problems from data is beneficial because it helps to identify valid and potentially important problems hidden in data that are unknown to domain experts, expand the range of tasks that can be performed in a domain, and inspire entirely new research directions. This is particularly important for emerging AI domains: compared to traditional AI domains, where research problems are usually formalized as "standard" tasks, emerging domains may present many more unknown problems, and these problems can be difficult to identify.

This white paper describes problem learning as learning to discover and define valid and ethical problems from data or from the machine's interactions with the environment. It formalizes problem learning as the identification of valid and ethical problems in a problem space and introduces several possible approaches, including problem learning from failure, problem learning from exploration, problem composition, problem generalization, problem architecture search, and meta-problem learning. Furthermore, the definition of a problem is usually closely related to its evaluation, since we need to know how successful a potential solution to the problem is. As a result, learning to evaluate problems alongside learning the problems themselves automates the problem definition and evaluation pipeline. In a broad sense, problem learning is one approach toward the free will of machines.

Another equally important caveat is the ethical consideration in problem learning. Giving a machine the free will to define problems it deems important does not mean that the AI should have unrestricted freedom to define and solve arbitrary problems; the problems it poses must satisfy ethical constraints.

Hierarchical AI Architecture

From an abstract point of view, a modern AI system can be represented by a hierarchical architecture, as shown in Fig. 1a. In this paradigm, the researcher or practitioner first defines the problem of interest. For example, the problem might be to classify a set of images, sentences, or graphs into several class labels, to predict the future values of a time series, or to rank potential answers to a particular question. Once the problem is clearly defined, the researcher usually devises a way to evaluate potential solutions to it; different measures are used to evaluate classification, prediction, and ranking problems. Next, the researcher designs a loss function that reflects the nature of the problem as closely as possible. Similarly, different loss functions are used for different types of problems, and modeling a complex problem may require several loss functions or a weighted combination of them.

As shown in Fig. 1b, many components of the architecture have been automated. The long-term vision for AI is to automate all components of the pipeline so that machines can automatically identify problems (problem learning), automatically build solutions to those problems (representation learning, model learning, loss learning, learning to optimize), and finally automatically evaluate the solutions (learning to evaluate). Problem learning is the last missing piece of the puzzle: it allows the machine to automatically identify problems that it considers important and worth solving, and it is a key component towards the machine's free will. On the other hand, it is also the component of the pipeline that most needs to be considered in the context of ethical and responsible AI, to ensure that free-will machines are helpful rather than harmful.

Representation Learning

Early intelligent systems mainly used manually designed representations. Representation learning [16] made it possible for machines to automatically learn features from data. Through end-to-end learning, a deep model is trained on specific data, allowing the underlying patterns in the training data to be detected and the most descriptive and salient features, usually in the form of representation vectors, to be extracted automatically [61]. The extracted features may be organized into a hierarchy to represent different levels of abstraction [16, 18]. Representation learning significantly reduces the manual effort at the most basic layer of a hierarchical AI architecture (Fig. 1b).

Model Learning

Automated model design is the next step in automating the AI pipeline. Model parameters are learned, but model structures are traditionally designed manually, which requires significant human effort. Neural Architecture Search (NAS) aims to let machines automatically learn the best model structure for a particular task, and automatically discovered model structures have proven comparable or superior to expert-designed structures on many tasks [19, 20, 21, 64].
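As a rough illustration of the idea (not of any specific NAS method from the cited work), the sketch below runs a random search over candidate architectures and keeps the one with the best validation accuracy:

```python
# Minimal random-search sketch of NAS: sample architectures, train
# each, keep the best validated one. Real NAS methods use RL,
# evolution, or gradient-based search; this shows only the loop.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_score, best_arch = -1.0, None
for _ in range(8):  # architecture search budget
    depth = random.randint(1, 3)
    arch = tuple(random.choice([16, 32, 64]) for _ in range(depth))
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300,
                          random_state=0).fit(X_tr, y_tr)
    score = model.score(X_val, y_val)  # validation accuracy as reward
    if score > best_score:
        best_score, best_arch = score, arch
print("best architecture:", best_arch, "val accuracy:", best_score)
```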

Loss Learning

The loss function determines how to penalize the model output and provides the signal for optimizing the model and representation parameters, usually through backpropagation. The domain expert's understanding of the problem and of the system's design goals is usually reflected in the loss function, which may be a combination of several losses of the form $L=\sum_i \lambda_i L_i$ to jointly consider multiple tasks. Researchers have also found that performance on the same task can differ greatly under different losses; for example, recent work has found that contrastive loss may be superior to cross-entropy loss for many tasks [56, 65]. It is therefore natural to ask whether a machine can automatically learn the best loss function for a task. Loss learning aims to achieve this goal: by automatically constructing loss functions from basic operators and finding loss functions comparable to or even better than manually designed ones [22, 23, 24], it reduces the human effort spent on loss design.
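A minimal sketch of such a combined loss, with illustrative component losses and hand-set weights $\lambda_i$ of the kind loss learning would instead discover automatically:

```python
# Sketch of a combined loss L = sum_i lambda_i * L_i for jointly
# considering multiple objectives. Component losses and weights are
# illustrative, not from the paper.
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, embeddings, lambdas=(1.0, 0.1)):
    l_1 = F.cross_entropy(logits, labels)   # L_1: classification loss
    l_2 = embeddings.pow(2).mean()          # L_2: representation penalty
    return lambdas[0] * l_1 + lambdas[1] * l_2

logits = torch.randn(4, 2, requires_grad=True)
labels = torch.tensor([1, 0, 1, 0])
emb = torch.randn(4, 16, requires_grad=True)
loss = combined_loss(logits, labels, emb)
loss.backward()  # provides the optimization signal via backpropagation
print(float(loss))
```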

Learning to Optimize

The optimization algorithm is the key to learning the model and representation parameters. Algorithm design is a painstaking process, often requiring multiple iterations of conception and validation [26]. Learning to optimize attempts to have the machine automatically learn an appropriate optimization algorithm, for example by replacing hand-designed update rules with learned updates based on an LSTM [25] or reinforcement learning [26]. The learned optimizers can be more efficient, accurate, and robust than many manually designed optimization algorithms.
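The sketch below captures only the core idea under a strong simplification: instead of an LSTM-parameterized optimizer, the update rule has just two learnable parameters (a step size and a momentum coefficient) that are meta-learned by search over a family of toy quadratic tasks.

```python
# Deliberately tiny sketch of "learning to optimize": the update rule
# itself has parameters (eta, mu) that are meta-learned on sampled
# tasks. Real work parameterizes the optimizer with an LSTM [25] or
# RL [26]; this shows only the two-level structure.
import numpy as np

def run_optimizer(eta, mu, steps=50):
    """Mean final loss of the parameterized update rule on quadratics."""
    rng = np.random.default_rng(0)          # same tasks for every candidate
    total = 0.0
    for _ in range(5):
        a = rng.uniform(0.5, 2.0, size=10)  # curvatures of f(x) = sum(a*x^2)
        x, v = rng.normal(size=10), np.zeros(10)
        for _ in range(steps):
            grad = 2 * a * x
            v = mu * v - eta * grad         # the "learned" update rule
            x = x + v
        total += float(np.sum(a * x * x))
    return total / 5

# Meta-learning loop: search the optimizer's own parameters.
best = min((run_optimizer(eta, mu), eta, mu)
           for eta in [0.01, 0.05, 0.1, 0.3]
           for mu in [0.0, 0.5, 0.9])
print("meta-learned (eta, mu):", best[1:], "mean final loss:", best[0])
```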

Evaluation Learning

Once a solution has been created for a particular problem, it is important to evaluate it to know its quality and usefulness. Many evaluation methods are manually designed rule-based metrics. Apart from commonly used metrics such as accuracy, precision, recall, F-measure, NDCG, and MRR [14], researchers may also design metrics tailored to specific tasks. However, designing evaluation methods manually is time-consuming, and the designed metrics may be difficult to generalize to other tasks.

Evaluation learning aims to solve these problems by letting machines automatically design evaluation protocols for tasks. For example, the Automatic Dialogue Evaluation Model (ADEM) learns automatic evaluation procedures for dialogue systems [28], and [29] proposed a learned discriminative evaluation metric that is trained directly to distinguish between human- and machine-generated image captions. Furthermore, recent advances in causal machine learning have made it possible to learn to evaluate AI systems based on counterfactual reasoning [76]. Simulation-based approaches, which build platforms for evaluating intelligent agents such as robots [77] and recommender systems [78] in simulated environments, can also be useful for evaluation learning. Evaluation learning helps to reduce the manual effort spent designing evaluation protocols and collecting evaluation samples.
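In the spirit of the learned discriminative metric of [29], though far simpler, the following sketch trains a discriminator on toy text features and uses its probability of "human-written" as a quality score; the features and data are placeholders, not anything from the cited work.

```python
# Sketch of a learned evaluation metric: a discriminator separates
# human references from machine outputs, and its P(human) becomes the
# quality score. Features (e.g., length, type-token ratio) are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
human = rng.normal(loc=[12.0, 0.8], scale=0.5, size=(100, 2))
machine = rng.normal(loc=[8.0, 0.6], scale=0.5, size=(100, 2))

X = np.vstack([human, machine])
y = np.array([1] * 100 + [0] * 100)   # 1 = human-written
metric = LogisticRegression().fit(X, y)

candidate = np.array([[11.0, 0.75]])  # features of a new system output
print("learned quality score:", metric.predict_proba(candidate)[0, 1])
```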

Problem Learning

Problem learning is the last missing piece of the puzzle towards an automated AI pipeline. It aims to proactively discover and define the problems to be solved. Problem learning is unique in that it is a key component of the free will of the machine: the other components of the pipeline focus on how to solve a given problem, not on which problem to solve, since problems are still identified and defined by humans, especially domain experts. In contrast, problem learning gives the machine the ability and flexibility to decide which problems it wants to solve, a major step towards subjective consciousness.

Formalizing Problem Learning

To formally define problem learning, we present three stepwise concepts: solution (S), problem (P), and problem learning (PL).

Solution

For different AI tasks, solutions may take very different forms, but in the abstract, a solution can usually be represented as a mapping that projects questions to answers.

To better understand the concept, consider sentiment classification as an example. Here, the question set is Q = {all sentences under consideration} and the answer set is A = {positive (+), neutral (0), negative (−)}. Different methods of sentiment classification can be developed, and the final solution is a mapping from Q to A that assigns a sentiment label to each sentence in Q.

An important caveat is that using mappings as the mathematical form of a solution implies an important assumption: by the mathematical definition of a mapping, each element in the question set Q maps to exactly one element in the answer set A. However, some questions may at first sight require a one-to-many solution that violates this definition. For example, many search or recommendation tasks require a ranked list, i.e., an (ordered) set of elements, as the solution. Consider a search engine, where Q is the set of all possible queries and A is the set of all possible documents. The search result for a query q ∈ Q is a subset of documents {d} ⊆ A associated with the query q, but mapping one element of Q to many elements of A is forbidden by the definition of a mapping. One way to resolve this is to use set-valued mappings: we can define S : Q → 2^A, where 2^A is the power set of A. In this way, the solution S is still a mapping, one that maps each query to a set of documents.
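The type sketch below, with purely illustrative names, contrasts an ordinary single-valued solution with a set-valued solution S : Q → 2^A:

```python
# Types for solutions as mappings, including the set-valued case
# S : Q -> 2^A used for search and ranking. Illustrative only.
from typing import Callable, Set

Query, Doc = str, str
Sentence, Label = str, str

# Ordinary solution: one answer per question (sentiment classification).
SentimentSolution = Callable[[Sentence], Label]

# Set-valued solution: each query maps to an element of the power set
# 2^A, i.e., a subset of all documents (a search engine).
SearchSolution = Callable[[Query], Set[Doc]]

def toy_search(q: Query) -> Set[Doc]:
    corpus = {"d1": "cats and dogs", "d2": "stock prices", "d3": "cat food"}
    return {doc_id for doc_id, text in corpus.items() if q in text}

print(toy_search("cat"))  # {'d1', 'd3'}
```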

Depending on the scenario, the solution can be a mapping of different forms, such as a function, an algorithm, or a model. Functions are the most convenient way to map a question to an answer, and this form is most widely used in science and engineering, such as physics and mechanical engineering. However, many solutions are too complex to be represented as functions, especially in computer science and particularly in AI. In such cases, algorithms map questions to answers via a procedure, which can be viewed as a multi-step composition of functions; this algorithmic type of mapping is most widely used in theoretical computer science and algorithm research. In other cases, a mapping function exists, but its exact analytical form is very difficult to find. Here, the Universal Approximation Theorem (UAT) [79, 80, 81] allows us to instantiate the mapping as a model architecture, such as a deep neural network, and to learn the parameters of the model from observations or counterfactuals; the final learned model then serves as the mapping from questions to answers. This model type of mapping is the most widely used in AI/ML research. Often, because many problems and their solutions are complex, the mapping is a combination of functions, algorithms, and models.

Another caveat is that the definition of a solution says nothing about its quality: a solution can be good, bad, or even absurd, but all of these are still solutions. Finding a good solution depends on the definition of "what is good", i.e., on how solutions are evaluated, but evaluation is not part of the definition of a solution. Instead, it is the evaluation module that determines how good a solution is; this module is largely independent of the solution itself, since it can evaluate the solution from whatever perspective the need dictates. Finding an appropriate solution to a given problem, through representations, models, losses, optimizers, etc., is the main focus of existing AI methods (Fig. 1).

Problem

Depending on whether the potential answer set A is predefined or has yet to be discovered, a problem can be defined in a deterministic or non-deterministic way.

In the context of problem learning, a deterministic problem is one in which the potential answers to the questions are known. In the case of deterministic problems, the question is either a general question whose potential answers are "yes" or "no", or a special question that admits many possible answers, but in either case, a set of potential answers is provided and the only problem is to find a suitable mapping between them. For non-deterministic questions, on the other hand, the answer set is unknown, and finding the answer set is part of the problem.

Taking image classification as an example, in a deterministic problem setting the questions are general questions such as "Is this image a cat?", in which case the answer set is A = {yes, no}. The question set can also consist of special questions such as "What is the class of this image?", in which case A = {cat, dog, horse, ...}. For the problem to qualify as deterministic, the answer set must be provided; if the answer set is unknown and finding it is part of the problem, the problem is non-deterministic. Though not a strict analogy, in the context of machine learning a typical deterministic problem is supervised learning, such as classification, and a typical non-deterministic problem is unsupervised learning, such as clustering.
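A minimal sketch of this distinction, with illustrative field names: a deterministic problem carries its answer set A, while a non-deterministic one leaves A to be discovered.

```python
# Deterministic vs. non-deterministic problems: the former ships with
# its answer set A; for the latter, finding A is part of the problem.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Problem:
    questions: str               # description of the question set Q
    answers: Optional[Set[str]]  # answer set A, or None if unknown

# Deterministic: supervised image classification with known labels.
classification = Problem("What is the class of the image?",
                         {"cat", "dog", "horse"})

# Non-deterministic: clustering, where the answers (clusters) must
# themselves be discovered.
clustering = Problem("Which images belong together?", answers=None)

for p in (classification, clustering):
    kind = "deterministic" if p.answers is not None else "non-deterministic"
    print(kind, "-", p.questions)
```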

Problem Learning

Problem learning aims at proposing a problem rather than solving it, but the proposed problem must be valid and ethical. It can thus be viewed as a constrained learning task whose constraints consist of validity and ethical requirements. Although problem learning does not directly solve the proposed problem, it can be closely related to solving it, since one key aspect of "validity" is whether the proposed problem is solvable at all. In many cases, however, it is not necessary to solve a problem to determine whether it is solvable; instead, a variety of methods can be employed to test the solvability of a problem before seriously attempting to solve it.
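Abstractly, this amounts to constrained search over candidate problems, as in the sketch below; the validity and ethics checks here are toy stand-in predicates, and designing real ones is precisely the open research question.

```python
# Problem learning as constrained search: candidate problems are kept
# only if they pass validity and ethics checks (stand-in predicates).
from typing import Callable, Iterable, List

def problem_learning(candidates: Iterable[str],
                     is_valid: Callable[[str], bool],
                     is_ethical: Callable[[str], bool]) -> List[str]:
    """Return candidate problems satisfying both constraint families."""
    return [p for p in candidates if is_valid(p) and is_ethical(p)]

candidates = ["predict machine failure from sensor logs",
              "predict a private attribute of a named user",
              "predict yesterday's closing price tomorrow"]

kept = problem_learning(
    candidates,
    is_valid=lambda p: "yesterday" not in p,   # toy solvability test
    is_ethical=lambda p: "private" not in p)   # toy ethics screen
print(kept)
```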

Some new problems are simple syntheses of existing problems that humans can easily come up with; work already exists on such problems, for example image sentiment classification [82, 83, 84]. However, applying problem learning to non-trivial scenarios may reveal new problems beyond our imagination. For example, an agent may find new problems in human–machine interaction by analyzing user behavior in a web or cyber-physical system; collective anomaly detection over a combination of metabolic indicators could surface new health problems or phenomena worth studying; or an agent may discover new signals worth predicting while interacting with the environment. Perhaps one of the most exciting application scenarios for problem learning is scientific discovery, in both the natural sciences, such as physics, chemistry, biology, drug discovery, and medical research, and the social sciences, such as economics, psychology, and sociology. In these fields, posing a new and meaningful problem, even without solving it, can significantly change researchers' perspectives and stimulate new ideas and research directions. One example is problem learning from failure: if an agent finds that existing methods fail to predict something well, a new problem worth investigating may emerge.

Problem learning is also important for helping to fill gaps between communities. In today's academic world, the amount of human knowledge has grown enormously compared to hundreds of years ago, so it has become nearly impossible for any researcher to possess knowledge of all areas. As a result, it is often the case that one community has made significant advances in how a problem is defined or solved, while another community still adopts an old definition of the problem or solves it with outdated methods. Problem learning agents can help reduce the gap between communities by maintaining a global problem space aggregated from different disciplines: when posing a problem in one community, the agent leverages insights from problem definitions in other communities. With the help of such an agent, a researcher is no longer limited to the problem scope of their home community, or of any other single community, when trying to identify important problems to solve.

One useful discussion is the relationship between deterministic and non-deterministic problem learning. The main difference between them is whether the problem learning agent provides a candidate answer set for the discovered problem. This implies a possible trade-off between the difficulty of posing a problem and the difficulty of solving it (or of asking a question and answering it). It may be easy to pose a question without providing candidate answers, but such a question is harder to solve, because finding the answer set becomes part of the problem-solving procedure. On the other hand, posing a problem while also providing candidate answers may be harder, but solving the problem then becomes much easier. As a naive example, if the answer set is provided as {yes, no}, even a random guessing policy has a 50% chance of answering the question correctly, though answering correctly does not necessarily mean that the solver understands the question. Overall, this suggests a no-free-lunch conjecture for problem learning.

What's a Good Problem?

Problem learning can be viewed as a constrained learning task. As discussed above, a good problem needs to be valid and ethical.

A Valid Problem

Mathematical Adequacy

Mathematical adequacy is primarily concerned with whether a problem is solvable. Typically, a problem learning agent is expected to pose potentially solvable problems so that they have practical impact. However, many unsolvable problems are also important, especially from a theoretical perspective, because they may stimulate new insights and discoveries. Mathematical adequacy can be described in two ways: 1) from a model perspective, considering the predictability of the target, and 2) from a problem perspective, considering the solvability of the problem.

The ability to make correct predictions is one of the most typical forms of intelligence pursued by humans. Many problems can be formulated as some kind of prediction problem, such as predicting the behavior of a human or a system, predicting the motion of an object, or predicting certain properties of a target item. Although many models have been developed to make predictions as accurate as possible, some targets may not be predictable at all due to theoretical limitations, and problems differ in how sensitive they are to deviations in the prediction results. Consequently, it is best if the problem learning agent can assess predictability [98, 99] through predictability tests when posing prediction-type problems.
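One pragmatic reading of such a test (an assumption of this sketch, not the paper's procedure) is to compare a baseline model's cross-validated score on the real target against the same model trained on permuted targets:

```python
# Simple predictability test: a target is treated as (potentially)
# predictable only if a baseline model beats the same model trained
# on a permuted copy of the target.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y_signal = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=300)
y_noise = rng.normal(size=300)               # an unpredictable target

def predictability(X, y):
    real = cross_val_score(Ridge(), X, y, cv=5).mean()
    perm = cross_val_score(Ridge(), X, rng.permutation(y), cv=5).mean()
    return real - perm   # clearly > 0 suggests the target is predictable

print("signal:", round(predictability(X, y_signal), 3))
print("noise :", round(predictability(X, y_noise), 3))
```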

Social Relevance

The validity of a problem must also be considered from a social perspective. A problem that is valid in one social situation may not be valid in another, and the effects of social situations may differ across time or space. For example, location tracking and prediction may be an invalid problem for an ordinary person living under normal circumstances due to privacy concerns, but for certain workers in a dangerous area or dangerous conditions, location tracking and prediction may be very important for protecting their safety. Consequently, a problem learning agent should be able to pose problems that are valid for the social context in which they are proposed. A visionary agent may even pose a problem that does not seem valid today but may become valid in the future, and such problems deserve early consideration.

An Ethical Problem

Ethical considerations in problem learning are critical because we hope that free-will machines will help humans rather than harm them. This requires the AI to pose problems that are non-malicious, responsible, socially beneficial, and respectful of human dignity. Ethical constraints in problem learning can be considered along several dimensions, such as transparency and explainability, fairness and justice, accountability and robustness, and privacy and security.

Transparency and Explainability

An ideal problem learning agent can explain why a particular problem is posed and why it is important or worth solving. This can be very helpful for humans to understand the agent's behavior and to build trust with the intelligent machine [74, 100]. Over the past few years, several explainable AI methods have been developed, including model-specific explanations such as linear regression [62], decision trees [101, 102], and explicit factorization [75], as well as model-agnostic explanations such as counterfactuals. Most of these methods, however, approach explainability from the model perspective rather than the problem perspective: they focus on explaining how the decision model works, not on why the problem is important or worth solving. As a consequence, such methods may explain the internal mechanism of the problem generation process, but not why we should care about the problem. In the context of problem learning, the latter is even more important, because a good explanation of why a problem matters helps humans understand the insight behind the posed problem and decide which problems to address and which to set aside.

Fairness and Justice

Problem learning agents need to be careful not to pose problems that might discriminate against or unfairly treat certain individuals or groups. In recent years, AI fairness has received much attention from researchers, but most current research addresses fairness at the model or outcome level: the focus is usually on the fairness of the machine learning model or of its decision results. Fairness at the problem level, by contrast, must consider whether the definition of the problem itself is fair.

Accountability and Robustness

Problem learning allows a machine to define and solve problems that it considers important. However, problem learning agents can be vulnerable to malicious attacks and manipulation. Intelligent machines that decide at will which problems to pursue can be very dangerous to humans, because if manipulated by dishonest individuals or entities, they can be directed to pose unethical or harmful problems. As a result, the accountability and robustness of problem learning are very important: problem learning agents should be able to handle unexpected errors, resist attacks, and produce consistent results.

Privacy and Security

Problem learning must also take into account user privacy and guarantee the security of protected information, because the descriptions of generated problems may reveal personal information about particular individuals or groups. This is especially true for problems involving the processing of user-generated data. As a result, when dealing with personal or sensitive information, problem learning agents must take care to avoid data leakage, protect user privacy, and pose problems securely and responsibly.

Possible Approaches to Problem Learning

Approaches to problem learning can be categorized into two main types: differentiable problem learning (∂PL), which creates new problems through learning in a continuous space, and non-differentiable (discrete) problem learning, which creates new problems through discrete problem analysis or optimization.

Problem Learning from Failure

In the current AI research paradigm, researchers usually define a problem and then develop different models that attempt to solve it. If the existing models do not solve the problem well, researchers tend to assume that the models are inadequate and therefore put more effort into designing better models. However, if an existing model does not successfully solve a problem, the cause may not be an inadequate model but an ill-defined problem, i.e., the problem is not being asked in the right way. In such cases it may be more important to think about how to define the problem better, rather than spending effort on designing a better model. Indeed, the ability to iteratively refine problems and methods is a fundamental skill in many research disciplines, especially the natural sciences, and intelligent machines need to acquire it. Problem learning from failure aims at this goal: if an agent cannot achieve satisfactory results, such as prediction accuracy, on a given problem with existing models, it would be very exciting if the agent could suggest changes to the problem definition such that existing models work well on the newly defined problem.
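A toy sketch of this loop, under the assumption that "refining the problem definition" means adjusting a single knob (here, the prediction horizon of a time-series problem):

```python
# Problem learning from failure: when the anchor problem is solved
# poorly, mutate its definition (the horizon) and keep the variant
# the existing model handles best. Entirely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
series = np.zeros(500)
for t in range(1, 500):                      # a mean-reverting toy signal
    series[t] = 0.9 * series[t - 1] + rng.normal()

def score_horizon(h: int) -> float:
    """Test R^2 of predicting h steps ahead from a 5-point window."""
    X = np.stack([series[i:i + 5] for i in range(470)])
    y = np.array([series[i + 4 + h] for i in range(470)])
    return LinearRegression().fit(X[:400], y[:400]).score(X[400:], y[400:])

anchor_h = 20                                 # the original, failing problem
if score_horizon(anchor_h) < 0.5:             # failure detected
    best_h = max([1, 2, 5, 10], key=score_horizon)
    print("refined problem: predict", best_h, "steps ahead,",
          "score:", round(score_horizon(best_h), 3))
```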

Problem Learning from Exploration

Problem learning from failure builds on an existing anchor problem to create a new one. An alternative approach, problem learning from exploration, does not rely on a specific anchor problem: the agent aims to discover valid and ethical problems through active exploration, such as exploring a dataset or interacting with the environment. One example is the active discovery of predictable or nearly predictable signals in previously ignored data. Investigating the predictability of single signals is a starting point, but it may seem trivial, since many individual signals have already been investigated manually by domain experts. Investigating the predictability of particular combinations of signals, however, is not straightforward: each signal may be unpredictable individually, yet when aggregated in a particular way, the combined signal becomes predictable.
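The sketch below constructs exactly this situation: two toy signals are individually swamped by noise that cancels in their sum, so only the combined signal surfaces as a predictable, and hence potentially valid, problem.

```python
# Exploration sketch: test the predictability of single signals and of
# their combinations, looking for a target worth predicting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
core = X @ np.array([1.0, -1.0, 0.5])   # latent predictable quantity
noise = 3.0 * rng.normal(size=400)      # large anti-correlated noise
s1, s2 = core / 2 + noise, core / 2 - noise

for name, target in [("s1", s1), ("s2", s2), ("s1 + s2", s1 + s2)]:
    r2 = cross_val_score(LinearRegression(), X, target, cv=5).mean()
    print(f"predictability of {name}: {r2:.2f}")
# Only the combination s1 + s2 emerges as a valid prediction problem.
```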

Problem Learning by Composition

Problem composition aims to build larger, more ambitious problems out of sequences of smaller, well-defined, known problems, where solving each small problem in the sequence provides information that enables the solution of the next. Composing a problem is not as simple as stacking several small problems together; it requires careful consideration of the relationships among the small problems and how they affect one another. A good problem learning agent will be able to identify these relationships and connect the problems in the right way to arrive at a valid composed problem. Problem composition can be seen as the reverse process of planning: in planning, a target problem is given and the agent must decompose it into several smaller, easily solvable problems to reach the goal; in problem composition, the agent is given several small solvable problems and aims to propose a valid larger problem.

Problem Learning by Generalization

Many known problems can be generalized into new ones. Problem generalization starts with a known problem and generalizes it by investigating alternatives to the subject, predicate, or object in the problem description. For example, if an agent knows that consumer purchase prediction is a valid problem in e-commerce, it might generalize it to consumer return prediction or consumer complaint prediction and run predictability tests to determine the validity of the new problem. Problem learning by generalization is related to problem learning from failure: a problem generated by generalization may or may not be adequately solved by a carefully designed model, and if it is not, problem learning from failure can be used to further refine the problem definition.
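A minimal sketch of this substitution process, where the predictability check is left as a placeholder (in practice it would be a test like the one sketched earlier):

```python
# Problem generalization: substitute the predicate in a known valid
# problem's description, keep variants that pass a validity check.
from itertools import product

anchor = ("consumer", "purchase", "e-commerce")  # a known valid problem
subjects = ["consumer"]
predicates = ["purchase", "return", "complaint"]
domains = ["e-commerce"]

def passes_predictability_test(problem) -> bool:
    # Placeholder: in practice, run a predictability test as above.
    return True

generalized = [p for p in product(subjects, predicates, domains)
               if p != anchor and passes_predictability_test(p)]
for subj, pred, dom in generalized:
    print(f"new problem: predict {subj} {pred} in {dom}")
```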

Problem Architecture Search

The approaches above assume that a problem is described in natural language. However, problems can also be represented using mathematical structures such as graphs. For example, various concepts such as "human", "face", "classification", "prediction", "cat", "dog", and "consumer" can serve as potential entities in a graph, with edges connecting them; typically, such entities are drawn from the question set Q. Once a problem can be represented as a graph, a Problem Architecture Search (PAS) algorithm can be developed to search for valid and ethical problems, analogous to neural architecture search (NAS), which searches for the best model structure. PAS can be implemented with reinforcement learning, where reward signals are provided by checking the validity and ethics of generated candidate problems. To make the problem search more controllable, the set of concepts used in the search may be restricted so that the agent generates problems within a target set of concepts.
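The sketch below uses random search rather than reinforcement learning, the simplest possible problem graphs (a single concept–task edge), and placeholder validity and ethics scoring:

```python
# PAS sketch: candidate problems are minimal graphs over a controlled
# concept set; a search loop keeps the highest-reward candidate, where
# the reward checks validity and ethics. Scoring is illustrative.
import random

concepts = ["human", "face", "cat", "dog", "consumer"]
tasks = ["classification", "prediction"]

def reward(problem) -> float:
    entity, task = problem
    valid = 1.0                                    # placeholder validity test
    ethical = 0.0 if (entity, task) == ("face", "prediction") else 1.0
    return valid * ethical                         # zero reward if unethical

random.seed(0)
best, best_r = None, -1.0
for _ in range(20):                                # search budget
    candidate = (random.choice(concepts), random.choice(tasks))
    r = reward(candidate)
    if r > best_r:
        best, best_r = candidate, r
print("proposed problem:", f"{best[1]} of {best[0]}", "reward:", best_r)
```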

Problem search may also be performed beyond the concept level. For example, as mentioned earlier, problem learning by composition can be implemented through problem search: each unit problem can be viewed as an entity in the graph, and PAS can be employed to search for valid and ethical configurations of unit problems to construct larger and more ambitious problems. This can be viewed as a modular architecture search procedure.

Meta-Problem Learning

Although the space of problems is infinite and each problem is defined differently, there are likely similarities and common structures shared by many problems. Meta-problem learning aims to learn such common structures as "meta-problems", which can then be used to induce new concrete problems.

Meta-problem learning is beneficial from several perspectives. First, it helps to extract similarities from seemingly different problems, enabling a collaborative learning effect that discovers superior problem structures. In a supervised problem learning setting, i.e., when a set of known valid and ethical problems is provided as supervision, this collaborative effect can help agents learn and encode latent definitions of "validity" and "ethics" from the examples, so that specific problems induced from a meta-problem readily satisfy the validity and ethics requirements. Second, it enables cross-domain problem learning, or problem transfer learning, by learning meta-problems from a variety of domains and generating specific problems from them. Finally, learning domain-independent meta-problem structures can improve the efficiency of problem architecture search by fine-tuning the search from meta-problems rather than starting from scratch.

Masayuki Tomoyasu (友安 昌幸)
JDLA G certificate 2020#2, E certificate 2021#1; Japan Society of Data Scientists, DS certificate; Japan Society for Innovation Fusion, DX certification expert; CEO, Amiko Consulting LLC
