The author of the "AI winter" thesis speaks again: the AI industry is a circus

2022-08-08

The author of "Ai cold winter theory" speaks again: the AI industry is a circus editor's note: after half a year, piekniewski, the author of "Ai cold winter theory" speaks again. This time, he targeted openai and Tesla, thinking that what they did was no different from the clowns in the circus. In his view, we are in the stage of "counterattack on the verge of death". At this stage, those who are most likely to lose are showing the most outrageous confidence they can think of to deceive more naive people into giving them money. The original title of the article is, compiled by 36 krypton God Translation Bureau, hoping to inspire you

About a year ago, my post on the coming AI winter went viral. As I promised then, I post regular updates to the views expressed there.

It's time for new content. Although many things have happened over the past year, nothing has changed my original thesis: the artificial intelligence bubble is bursting.

Just like every bursting bubble, we are in the "last-ditch counterattack" stage, in which those with the most to lose project the most outrageous confidence they can muster in order to con more naive people into giving them money.

Now, let's review what has happened.

First, let's talk about some serious things.

Geoffrey Hinton, Yoshua Bengio and Yann LeCun, the three founders of deep learning, won the Turing Award, the most prestigious award in computer science.

If you expect me to question this decision in some way, you will be disappointed. I think deep learning deserves the Turing Award.

In my personal opinion, it leaves a slightly bad taste that Jürgen Schmidhuber is not among them. Although I think he sometimes comes across as awkward, his contributions to deep learning are undeniable.

By the way, these guys have been quite quiet. Hinton finally has his own Twitter account, but he has kept a modest style and has not posted anything I would consider untrue or over-enthusiastic. Yann LeCun occasionally promotes his research, but nothing out of the ordinary. Likewise, Bengio is not active on social media.

My other favorite people have also been quiet. Last fall, Fei-Fei Li left Google Cloud and returned to her post at Stanford University. Andrew Ng has been unusually quiet: maybe it's because he recently had a child. I sincerely congratulate him and his wife; two months ago I had a child myself, and I know what a joy it is, but I also realize that a child may keep him from working 90-hour weeks and lead to the inevitable postponement of the AI singularity.

In the past few months, a series of ironic events have occurred in the field of artificial intelligence, centered on OpenAI and Tesla.

The AI circle is a circus

OpenAI is a non-profit organization whose mission is to solve artificial general intelligence (AGI) and to ensure that its discoveries are open to the public, rather than captured for profit by some malevolent corporation.

In February this year, they announced a text-generation model, GPT-2. To everyone's surprise, they did not open-source it, claiming they were worried it might be misused, which caused great controversy among researchers and the AI community.

Judging from this incident, I'm not sure how these guys can still claim to be "open" while withholding key parts of the model. By the way, in my opinion, GPT-2 could well be reproduced by the crowd before they get around to releasing the complete model themselves.

(Translator's note: Connor Leahy, a student at the Technical University of Munich, spent 200 hours and about 6,000 yuan over two months to reproduce GPT-2.)

Although the text generated by GPT-2 looks plausible, I'm not sure how anyone could abuse it to generate fake news or spam, or really use it for anything other than entertainment.

However, the organization called "OpenAI" is obviously no longer open. The supposed non-profit has also come up with a plan for how to make profits.

What was supposed to be an organization akin to a "Prometheus of the 19th century", a monastery of unbiased and impartial researchers striving to bring the fire of artificial intelligence to all of humanity, is no longer open, and behind the scenes it is actually run for profit.

However, they claim to still adhere to the ambitious mission, because profits will be capped: each investor can get a return of at most 100x. The only reason I can think of behind this move is that they could no longer raise funds as a non-profit.

Let's set aside the fact that, so far, the company's revenue is essentially zero, and that its structure does not resemble any startup imaginable (it looks like a research laboratory).

Sam Altman, who is well connected in Silicon Valley and used to run the startup incubator Y Combinator, is now the CEO of OpenAI.

In a recent interview of his, I found some wonderful passages about the "hype cycle". I hope you will watch the complete interview; here I quote only the more striking parts:

For example, when asked how OpenAI plans to make money, Altman replied: "The honest answer is, we don't know. We have never made any money. We have no current plan to generate revenue. We don't know how we will generate revenue."

Altman continued, "we have made a commitment to investors, 'once we have established a general intelligent system, basically we will ask it to find a way to bring you a return on investment.

When the audience burst into laughter, Altman himself admitted that it sounded like an episode of the TV series Silicon Valley, but he added: "You can laugh. It's all right. But it really is what I believe."

Yes. I don't know what to add. I am convinced that figuring out how to generate revenue should be a lot easier than figuring out general AI. In short, the pitch amounts to: we believe we can build artificial general intelligence; we have no evidence, but we do believe that if you give us billions of dollars, we can do it.

I wish I could believe this, but unfortunately I don't. I think OpenAI has become a complete sham. Checking the tweets of some OpenAI employees has only strengthened this judgment. For example:

Wojciech Zaremba, who was a very rational person back at New York University (NYU) (I exchanged several emails with him in 2015), has now become a "believer".

There are so many things wrong with the tweet above that it is hard to imagine it comes from a scientist with any grasp of data and reality.

But the fact is that these seemingly intelligent people believe the echo-chamber information of the San Francisco Bay Area, and thousands of young people blindly follow them.

Specifically, the approach described above (I'm not even sure what "approach" really means here) may not work, for a number of reasons (and there is plenty of evidence that it does not).

One potential reason is that even if they had all the data, it would likely be biased, because human drivers actively avoid edge cases. Another potential reason is that edge cases form a very sparse set, which gets statistically drowned out by the "non-edge cases".

In deep learning, the loss is optimized over all data points jointly, which may wash out the edge cases entirely. Each edge case may require its own data rebalancing or loss-function tuning.
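To make the "washed out in the average loss" point concrete, here is a minimal PyTorch sketch; the class counts, weighting scheme and numbers are invented purely for illustration and have nothing to do with Tesla's actual pipeline. With a 1000:1 imbalance, the lone edge-case frame contributes about 0.1% of a plain averaged cross-entropy, while inverse-frequency class weights (one simple form of the rebalancing mentioned above) restore its influence.

```python
# Illustrative sketch only: made-up batch, made-up classes, not any real training pipeline.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy 2-class batch: 1000 "ordinary driving" frames and a single "edge case" frame.
logits = torch.randn(1001, 2)
labels = torch.zeros(1001, dtype=torch.long)
labels[-1] = 1  # the lone edge case

plain = F.cross_entropy(logits, labels)  # the edge case contributes ~0.1% of this average

# Weight classes inversely to their frequency so the edge case is not drowned out.
counts = torch.bincount(labels, minlength=2).float()
weights = counts.sum() / (2.0 * counts)  # roughly [0.5, 500.5]
balanced = F.cross_entropy(logits, labels, weight=weights)

print(f"plain loss:    {plain.item():.4f}")
print(f"weighted loss: {balanced.item():.4f}")
```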

Another reason is that even with 500,000 Teslas on the road, and even if they collected all of the driving data (which in fact they do not), it is hard to cover all edge cases if their distribution is long-tailed and non-stationary.
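A rough way to see the long-tail problem numerically: the sketch below (all quantities invented purely for illustration) assumes the distinct edge-case types follow a heavy-tailed, Zipf-like frequency distribution and counts how many types are never observed even after a large number of logged edge-case events. Under these made-up assumptions, the large majority of types never appear in the data at all, and a non-stationary world keeps adding new ones.

```python
# Illustrative sketch only: all numbers are invented, nothing here reflects real fleet data.
import numpy as np

rng = np.random.default_rng(0)

n_types = 100_000                                    # hypothetical distinct edge-case types
probs = 1.0 / np.arange(1, n_types + 1) ** 2.0       # heavy-tailed (Zipf-like) type frequencies
probs /= probs.sum()

n_events = 1_000_000                                 # hypothetical number of logged edge-case events
cdf = np.cumsum(probs)
samples = np.searchsorted(cdf, rng.random(n_events)) # draw the type of each logged event
samples = samples.clip(max=n_types - 1)

seen = np.unique(samples).size
print(f"edge-case types never observed: {1 - seen / n_types:.1%}")
```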

(Another way to picture this: train a model on historical stock-market data, use it to guide your trading, and you will go bankrupt faster than you can imagine.)

Fourth, the edge cases faced by human drivers on the road are different from those faced by autonomous vehicles, and they keep changing.

Another, perhaps least obvious, reason is that the specific deep network they are using may not have enough expressive capacity for what it needs to express (even assuming they had all the data needed to train it).

The multi-layer perceptron has been known since the 1980s, and it is a universal approximator. However, it took a very specific, finely tuned architecture called the ConvNet, a series of tricks to improve convergence, and a sufficient number of trainable parameters to solve ImageNet.

Even today, you can't just take a generic multi-layer perceptron and expect it to learn whatever task you throw at it. Even with plenty of data, you need a lot of tuning and hyperparameter search to find a model that solves the problem.
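To illustrate how much of the work the architecture itself does, here is a small PyTorch sketch; the layer sizes and shapes are arbitrary and purely illustrative, not anyone's production model. The plain MLP flattens the image and ignores spatial structure, while the ConvNet bakes in locality and weight sharing; being a universal approximator in principle does not hand you that inductive bias, or the right hyperparameters, for free.

```python
# Illustrative sketch only: arbitrary toy architectures for 32x32 RGB inputs.
import torch
import torch.nn as nn

mlp = nn.Sequential(                      # flattens the image, ignores spatial structure
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

convnet = nn.Sequential(                  # locality and weight sharing baked in
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),
)

x = torch.randn(4, 3, 32, 32)             # a dummy batch of 4 images
print(mlp(x).shape, convnet(x).shape)     # both produce (4, 10) class scores
for name, m in [("mlp", mlp), ("convnet", convnet)]:
    print(name, sum(p.numel() for p in m.parameters()), "parameters")
```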

Finally, even if all of this is feasible in principle, we don't know whether it can be done within the limits of Tesla's on-board computer.

Other players in the autonomous-driving field stuff their cars with lidar. Although expensive, it solves most of the problems Tesla is trying to solve with artificial intelligence.

And a mature gaming GPU provides more computing power than Tesla's latest hardware. Obviously, no company is anywhere close to fully autonomous driving.

These are the reasons Tesla's approach to self-driving may not work, and any competent data scientist could recite them even after a bottle of vodka.

speaking of Tesla's "autonomous day", they made some quite bold statements. For example, Tesla will have 1million autonomous taxis in 2020. In my opinion, this possibility is zero

But I don't want to spend too much of this post on that. (Frankly, I don't want to attract the attention of Tesla fans; let them live in their own fantasy world.)

Let's look at Tesla from another angle.

Something interesting happened earlier this year: a research scientist at the Massachusetts Institute of Technology
