Battered Wife Germany to 'Lead' Europe Against Russia?, EU Leaders Rush to Beijing in State of Panic, AI 'Charlatans', The Ayotzinapa Mystery, The Irish Tenor McCormack
Good points. All of us across the planet are the “battered wives” of this world wide takeover. If it was just Europe or just the west we would be fortunate. I like the concept of the codependent relationship here.
One last thing: a few weeks ago there was a Croatian guy in the replies insisting to me that Germany's economy was fine, despite BASF shutting down an operation "permanently" and moving it to East Asia, as well as other industrial concerns announcing plans to move manufacturing facilities to the USA due to the increase in energy prices at home.
Since the beginning of the war in Ukraine, Europe has spent 800 billion (!) Euros to:
"........shield households and companies from soaring energy costs has climbed to nearly 800 billion euros, researchers said on Monday, urging countries to be more targeted in their spending to tackle the energy crisis.
European Union countries have now earmarked or allocated 681 billion euros in energy crisis spending, while Britain allocated 103 billion euros and Norway 8.1 billion euros since September 2021, according to the analysis by think-tank Bruegel."
Per capita the biggest spenders were Luxembourg, Denmark, and Germany.
Wow. I wondered why they were paying so much more for energy but I wasn’t seeing riots. How long can they keep this up? We learned during the pandemic that these accounting tricks don’t work great in the long run (I’m sure with this being more targeted they are arguing otherwise).
The Locklin argument was valid for space before the Falcon rockets, now it’s wrong – with the upcoming successful deployment of Starship it will be ludicrous. Holding on to these points past their due date moves him from rare & valuable reasoned critic of whig history to crank “predicting twenty of the last three crises”.
GPT-4 and the already existing but private much stronger models also are an enormous leap in productivity for any job “creating and transforming symbolic information”, Spandrell is right here that it’s wrong often enough to be a mouse trap for marginal desk workers but a real boon for people who know what they’re doing. I work in software (not “bureaucracy amplification”, industrial process automation and utilization driving) and it’s a huge time saver already, with progress in the field seeing accelerating acceleration.
Who cares whether “AI” can park cars; the field used to bark up the wrong tree, it’s barking up the right one now.
Well, that's my experience as well. I'm just curious why they think that AI is gonna cause job loss in, for example, areas that work more on the hardware side of things. Seems like it would be the opposite really.
I think another strong argument against the “stagnation” thesis is that it just was a consolidation phase in engineering, where few spectacular breakthroughs happened, but a lot of technology was incrementally, but heavily, optimised.
Consider how much social dysfunction, process, regulation*, bureaucracy was heaped on in that time, and the net result was stagnation instead of collapse. This would not have been possible without a lot of incremental improvement to productivity in that time frame.
The fatal issue is not how little progress happened, but rather that the output of these advances was frittered away instead of accumulated to provide the material means for new breakthrough innovation.
* Deregulation never happened – regulation just became less overtly hostile & designed(!) to stop industry in its tracks and more an infinitely detailed mechanism of control in the switch over from the old left to the new (managerial & culturally progressive) left.
The volume of regulation never let up. If anything, it increased a lot just through one aspect of the switchover in regime, the replacement of remaining bright-line prohibitions by detailed process guidance. These were largely ineffective at their stated purpose (“harm reduction” etc), but brilliant at their actual purpose, creating jobs for the boys and affordances for social control.
Good comments. From an application point of view, very interesting and sometimes useful things have happened, but in terms of "new technology," I suppose it depends on how one views the term.
The small model revolution was 3 weeks later, the initial wager was based on the obscure megacorps' development schedule (which can be curtailed in the West, in Europe, by law), so I'm comfortable.
The EU already has some anti-ML fuckery in the pipeline (per some alpha-GDPR lawyer I had the luck to talk to), but if small models can deliver, good luck stopping it. As far as gains go, it beats any historic incentive for piracy. (The small model revolution also makes sanctions on high-end chips against China moot).
I also claimed, first in the Hungarian parasocial sphere, in the summer of 2015, that Donald Trump would be the next president of the US.
I was hoping to troll you into using chatWhatever to place an optimal bet. Anyway the last time I visited your country, the federal police went through a locker full of my filthy underpants looking for spy stuff. If you can guarantee that doesn't happen, I'll accept your free drinks offer (I can guarantee that will happen) Feb 24 2026.
Your government made my intra-European flights an involuntary prostate exam after 9/11, so let's call that even.
Please check out my latest post about smaller models, I've just updated it, Stanford Alpaca should theoretically be able to converse close to ChatGPT levels on a 2006 (maxed out) Mac Pro. We're not bound by hardware, we haven't been for 17 years.
Stanford Alpaca was also fine-tuned from the LLaMA-7B foundation model for a couple of hundred bucks. The magic is not only in the large scale – it is there too – but the real magic is what's happening at the hobbyist level.
For 3 years I thought I was lucky to participate in the short window of accessible LLM foolery, and it will eventually be way above my pay grade (GPT-3 is early summer 2020), but 2023 keeps delivering surprises.
It was in 99, and I told everyone in 96 that the first Patriot act was a bad idea. Nobody listened to me then either. It was a bizarre event; I got mistaken for some kind of spy by dressing like a 90s goth in a suit. You'll have to wait 3 years and warm me up with plenty of free booze for the full thing.
Alpaca is a pretty cool trick; I dunno why anyone uses floating point on any of these things other than that the hardware is made that way. You should see what you can do with an echo state network, which is basically just linear regression projected onto random sparse connections.
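A minimal sketch of an echo state network as described (my own numpy illustration, not the commenter's code): a fixed, sparse, random reservoir is driven by the input, and only a linear readout is trained, here by ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: fixed random sparse recurrent weights, never trained.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W[rng.random((n_res, n_res)) > 0.1] = 0.0          # ~90% of connections pruned
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()      # spectral radius < 1 (echo state)

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collecting its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Teach it to predict the next sample of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# "Just linear regression" on the reservoir states (ridge readout).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

The only learned parameters are in `W_out`; everything recurrent stays random, which is why training is so cheap.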
“I have a markdown file, containing a book. Chapters are marked with roman numerals, like this line: "## I."
I want a php script, that will output a csv, where each line is a chapter, the first column is the chapter's title (the roman numeral), the second column is the word count in that chapter.”
“How do I <ops problem description> using <product>? Please produce a finished numbered checklist for use in emacs org-mode.”
“Translate the following snippet from <programming language> to <other programming language>.”
“Inspect the following code snippet. Do you have any suggestion how to improve security or performance?”
“A client has asked x. y, z, a, b are the facts of the matter. Please write an explanatory email in a matter-of-fact style using polite language a British consultant would use.”
It’s amazing at all of those – programming language translation uses appropriate and commonly used libraries in the target environment (and often does refactorings saving time or memory, unbidden). It writes very nice polite pablum. In code reviews it’s a bit better than the usual mid senior dev fare.
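As an illustration of the first prompt's task (the original asks for PHP; this is my own Python sketch, not model output): split a markdown book on roman-numeral headings and emit per-chapter word counts as CSV.

```python
import csv
import io
import re

def chapter_word_counts(markdown_text):
    """Split a markdown book on '## <roman numeral>.' headings and
    count words per chapter. Returns a list of (title, count) rows."""
    rows, title, words = [], None, 0
    heading = re.compile(r"^##\s+([IVXLCDM]+)\.\s*$")
    for line in markdown_text.splitlines():
        m = heading.match(line)
        if m:
            if title is not None:
                rows.append((title, words))
            title, words = m.group(1), 0
        elif title is not None:
            words += len(line.split())
    if title is not None:
        rows.append((title, words))
    return rows

def to_csv(rows):
    """Serialize (title, count) rows to CSV text."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

book = "## I.\nOne two three.\n\n## II.\nFour five.\n"
print(to_csv(chapter_word_counts(book)))
```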
When he was active – he was minister from 1998 to 2005 – Joschka Fischer gave me the impression of being a US agent, given how religiously he followed US narratives. Since then I have had the feeling that the US never lost control of the German Greens.
On the question of how the US got control of European politics this article on French politics may be enlightening. Macron was not the first choice of the French people. There were much stronger candidates, such as Dominique Strauss-Kahn and François Fillon. Both were in favor of a much more independent foreign policy and both were derailed by dubious "scandals".
On AI I can recommend this article on how BMW builds a virtual version of a new factory before it builds the real one. That enables them to find inconsistencies and problems much earlier and at lower cost:
Of course Scott is right. Any lay person would recognize the truth of his views.
Progress on deep AI is stalled, which is well known.
The current rush of AI platforms poses no threat to humanity - the histrionics are misplaced - but does offer massive opportunity: not life-changing, just doing the things that are already done more efficiently and perhaps identifying better things to do.
Anything genuinely new over the last couple of decades is more or less design and better colour options.
Scott doesn't know what LLMs can do, because even OpenAI doesn't know what their models can do. This is why they've made GPT-2 available to the public, and keep some access to its successors.
Stanford Alpaca uses a method where instruction training can be done by instruction sets generated by the very LLM that needs to be fine-tuned to be better at understanding instructions (paper is from this January, that is 2023). LLMs can, demonstrably, train themselves.
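A toy of the self-instruct loop's shape, with `complete` as a hypothetical stand-in for a real model call (the actual Alpaca pipeline prompts an LLM and filters generations for quality and diversity; none of these names come from their code):

```python
# Toy stand-in for an LLM completion call; a real pipeline would query
# the foundation model here instead of this deterministic stub.
def complete(prompt):
    seed = prompt.splitlines()[-1]
    return f"{seed} Explain your answer step by step."

def self_instruct(seed_tasks, rounds=2):
    """Bootstrap an instruction-tuning set: the model expands its own
    seed instructions, and the grown pool becomes fine-tuning data."""
    pool = list(seed_tasks)
    for _ in range(rounds):
        new = [complete("Write a variation of this instruction:\n" + t)
               for t in pool]
        # keep only variations not already in the pool (crude dedup)
        pool += [t for t in new if t not in pool]
    return pool

data = self_instruct(["Summarize the following text."])
```

The point is the feedback shape: the model's own outputs become its next training set, so the instruction pool grows without human labeling.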
Add internet access, code execution, a reward system, holy shit we're just in the beginning of an explosion.
Alpaca runs on any system with 16 GBs of memory. Check out their interactive demo (seems to be offline). The model takes up half of a Blu-ray disc.
Even the world's leading experts agree that deep learning is stalled, it hasn't budged in long time. This flurry of AI platforms doesn't change that fact.
Well, if you're fine with the consensus of the "leading experts", I can't persuade you to look into what I have to offer.
I'm not a leading expert, yet I ended up writing 30k characters on it last week, as my preconceptions of what a given model, running on given hardware, can do were blown away. We can do things that in 2020 were unimaginable.
I'm not oohing and aahing over GPT-4 (I should, but then again, OpenAI isn't sure what it can do); I myself brought the climate apocalypse closer by brute-force training the largest models Google's cloud GPUs allowed 3 years ago, and my past conclusions have been upended, last week!
"something that runs on a cheap, tiny little integrated circuit, one that fits in your palm, one that has been out for years, using the latest machine learning techniques, can converse with you like ChatGPT.
ChatGPT is Big Magic that you’re right to view as being completely out of your control as a mere mortal. Or should have viewed it so, up until now."
Well, the AI that's currently exploding was imagined a VERY long time ago, and was predicted to be out in the wild long before now. What we have today was definitely not available 3 years ago, but was both imagined and being built.
Just please, check out Alpaca, to see how brilliant hacks can lead to revolutionary results.
(Also recommend LLaMA-7B quantized to 4bit, allowing inference on anything that has 4 gigs of ram, some fridges even, today).
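For intuition, a toy blockwise absmax quantizer in numpy - a simplification of the idea behind 4-bit LLaMA inference, not the actual llama.cpp storage format:

```python
import numpy as np

def quantize_4bit(w, block=64):
    """Blockwise absmax 4-bit quantization: one float scale per block
    plus a signed 4-bit integer (-7..7) per weight.
    Assumes w.size is divisible by block."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized values."""
    return (q * scale).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, 4096).astype(np.float32)   # toy weight tensor
q, s = quantize_4bit(w)
w_hat = dequantize(q, s).reshape(-1)
```

Packing two 4-bit values per byte plus one scale per 64 weights works out to roughly 4.5 bits per weight, which is how a 7B-parameter model fits in about 4 GB of RAM.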
Also, at the end of my post there's a Microsoft project for an Excel "autocomplete" LLM, that only has 60 million parameters. That could have shipped 10 years ago, as far as the hardware is concerned. We had the hardware.
True, the only progress is increasingly complex algorithms which are not sentient or conscious in any sense (the assertion otherwise is a category error). That said, the progress can and definitely will replace more white collar middle class/upper middle class jobs, aiding the transition towards neo-feudalism (another part of Locklin's thesis).
Won't replace jobs. The people who become proficient at using AI tools will take the jobs.
Remember how desktop computers were going to lead to the paperless office? Or require fewer staff, because everything was more efficient? All that happened was people printing endlessly, and spending 50 percent of their time on formatting.
I think Scott is largely correct about AI (I taught AI/machine learning at uni for several years). From an inside perspective, what AI has achieved is very remarkable - speech recognition, image recognition, etc. But most of the neural net machinery that gives the power in speech recognition, image recognition, was invented in the 1960s to 1980s. What changed was computer hardware - specifically harnessing the GPUs (graphic processing units) on video boards to do vectorized computation (rather than buying expensive workstations), data availability thanks to the internet (and mechanical turk/underpaid grad students), and learning a few tricks with respect to stochastic optimization (starting points, for example - not really a technological breakthrough, but just computational experience).
The important thing to realize about AI is that it essentially creates a very large table (or an approximation of one). That's it. That's the magic. Give it an input and it gives you an output. That's where the tech is, and that apparently will only take you so far.
Yes - what it took was some clever folks at the University of Montreal figuring out how to hack their PCs and Macs to get at the GPUs on the video cards. That unleashed the power of the GPUs put there for video games. The weird technology path was that demand from video gamers created these very powerful processors without much or any idea of their use in machine learning. Now, of course, NVIDIA, one of the major GPU makers, has rebranded itself as an AI company. The revolution in hardware came not directly, but thanks to gamers. Before this happened I never had any appreciation for video games...
NVIDIA was ahead of the game, their GPGPU line long predates the OpenAI craze.
They - of course - did not anticipate this outcome (no one, not even OpenAI as of 3 years ago, did).
You had to be there, 3 years ago, to see how unimpressive it was compared to what we can do today at small scale, on the SAME hardware. We've come a long way.
The main practical use of AI in everyday life will be in infotainment. Every mobile phone will house an AI avatar (eventually embodied in a hologram) that will take the place of friend, educator, mentor, pet and sexual partner. No one with a phone will ever be lonely or short of an opinion any more than they will be free of surveillance. It is hard to imagine anything sadder.
If we are very lucky, the Germanisation of the war in Ukraine will merely end in farce. If not, a nuclear exchange.
Successful wars require good leadership, extended preparation, logistical supremacy (including sound finance) and well-trained soldiers in large number. None of these are available to Germany at the present time. Russia has the advantage in all of these areas, with the added advantage of first-class air-defense systems and hypersonic weapons.
If NATO deploys either a German or Polish force on the ground within Ukraine Russia will drive them out and will most likely re-take Ukraine in its entirety. The Ukraine will become a protectorate under Russian control or get absorbed into the Russian Federation. The EU will end up with Cold War #2, but without the advantage of the post-war prosperity that made class compromise possible and with the great disadvantage of untold millions of non-Europeans demanding benefits, opportunities and a say in the political system.
The benefit of this for Turbo America is considerable. German failure in Ukraine will reduce the capacity of Berlin to play a role in Western Asia, the eastern Mediterranean or North Africa (home to the natural gas deposits). It will further divide Berlin from Beijing. German re-armament will drive the bulk of the EU even closer to Washington.
Can't disagree with anything you said really. We will see if Germany goes along with it. There is a possibility that alienating China is a step too far, even for the most loyal servant of the American Empire.
I'm sorry, where is the upside for America in this? More fudged accounting and hype that they are still the best. It's the Koolaid!
They, the countries backing the rules based order, are throwing their resources into the grinder, but the others can absorb whatever is thrown at them. Where does that leave us?
The immediate upside for Turbo America in delegating the attempted containment of Russia to Germany and Poland is to reduce the need for US deployments. The US has an enlistment crisis and is overextended across the globe.
However, IMO the chance of a Ukrainian victory is nil. Their army has sustained very heavy losses, the press-gangs coercing teenagers and middle-aged men are not going to solve the lack of capable soldiers of fighting age. Assuming that they are capable of understanding this, the Biden Administration is playing for time and hoping to find a way to manage the narrative at home.
Ultimately, there are no upsides for America beyond wrecking the economies of Europe (reducing the euro as a rival to the dollar) and forcing a deindustrialised Europe into abject dependency on the US. The defeat of Ukraine will be a colossal humiliation for Washington but some might argue that this is a price worth paying in the medium to long term.
I never thought of the period before 1970 as a particularly interesting time from an IT point of view. In my life, we've gone through three big and sudden tech revolutions (dates to taste):
* personal computers 77-80
* the web 96-00
* smart mobile (/ social media) 08-12
What's happened in the last 6 months in AI seems more significant than any of those. Developments are happening on a week-to-week basis, with computers doing things that everyone thought humans were safe from - art, programming, writing essays, idea development. We're already scrambling to figure out how we can use this in our business for some rather complicated and expensive data entry - and we can just pick the tools up to do this for tens of dollars / month.
From a nerd POV, the next three years are going to be amazingly exciting and opportunity-laden. After that, we will see.
Note: I regularly see people driving around in Teslas in Guelph, hands off. My daughter and I call it "time to eat the burrito" (from the first time we saw this). Likely more reliable than the human behind the wheel.
(Lead quote refers to constant goalpost shifting of AI critics)
And all these so-called great developments are in the downward direction, often imitations of imitations. Of course, some people are making a very decent living out of all these 'developments'.
AI remains at the 'expert systems' level. It can't reason, it isn't sentient. But it's a very useful tool in the hands of those who can do the latter.
Robotics has had more success. Factory robots can perform tasks more efficiently than human hands ever could. If it weren't for the dismal state of AI, autonomous robots could start exhibiting Terminator-like behavior. Until then, such "behavior" has to be programmed.
My mental view is "having access to a really fast and eager junior". You get code, but you do have to check it for correctness.
CoPilot and ChatGPT are in my normal day-to-day workflow. I don't use it for "write this program" (good luck) but rather - "how do use SQLite3 to create a table with fields so and so, and insert a record if X Y Z". That's 15 minutes of work compressed to 1 minute.
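A sketch of the kind of answer that prompt yields, using Python's sqlite3 (the table name, fields, and the "insert if absent" condition are invented for illustration):

```python
import sqlite3

# Create a table and insert a record only when no matching one exists.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE IF NOT EXISTS jobs (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE,
        status  TEXT NOT NULL DEFAULT 'queued'
    )
""")
# The "insert a record if X Y Z" part: add the job only when absent,
# guarded by a NOT EXISTS subquery.
con.execute(
    "INSERT INTO jobs (name, status) "
    "SELECT ?, ? WHERE NOT EXISTS (SELECT 1 FROM jobs WHERE name = ?)",
    ("nightly-build", "queued", "nightly-build"),
)
con.commit()
rows = con.execute("SELECT name, status FROM jobs").fetchall()
```

Running the same insert a second time is a no-op thanks to the NOT EXISTS guard, which is typically the behavior such a prompt is after.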
You also have to check human code (junior as well as senior) though.
On that front, you can hand it code with various defects and it tells you what it finds and how to mitigate them. It has correctly identified and fixed off-by-one errors and SQL injection attacks and invalid memory allocation checks and double frees and… for friends and myself.
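To make one of those defect classes concrete, here is a toy SQL-injection bug and its fix (my own invented example, not code any model actually reviewed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.execute("INSERT INTO users VALUES ('alice'), ('bob')")

def find_user_unsafe(name):
    # Defect: string interpolation lets the input rewrite the query.
    return con.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Fix: a bound parameter is treated as data, never as SQL.
    return con.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
leak = find_user_unsafe(payload)   # the classic payload dumps every row
safe = find_user_safe(payload)     # the parameterized query returns nothing
```

This is exactly the shape of finding-plus-mitigation a code-review prompt tends to return.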
Maybe it can’t “really reason”, but we also don’t know what “really reasoning” is and how it works; it seems very “whistling past the graveyard” to me how many people who for all intents and purposes act as materialists in everyday life suddenly become quite spiritual and metaphysical about what “real intelligence” is.
It doesn’t seem absurd to me at all that some more:
* scale-up
* allowing processes to run persistently & self-feedback
* access to open feedback unsupervised learning through Internet access / “real world interactions”
will be enough to create an ASI. Chances are it won’t, but I think the irreligious among us would have a very hard time explaining how it *can’t*. Point being that Land might very well be right that intelligence is physical (and emergent from complection of simple processes), not metaphysical, and develops its own telos.
One of the interesting things about current day AI approaches is how they scale - in that, they do: the more you add, the more you get out of it.
When looking at Chess (and Go, though I am less familiar with this), as it gets better it sure as hell looks like it is reasoning, setting traps, making plans - "oh look what it's doing here".
Then you place an adversarial puzzle that a human could solve in a minute and it melts down searching for a solution. But from my POV, good enough for show business. I'm not worried about reasoning, just repeatably getting solutions for problems I need to solve.
Yes! This is also something observed in brains: “Just more” scales non-linearly as most higher functions seem to mostly be emergent properties of having a high node number & dense network.
Some people seem way too confident that much better results require new math instead of “simply” ever more layers/parameters fed from larger/better training sets, when that is exactly what got us the progress of the last few years, which is obviously incredible in effect (if admittedly not “research innovation”) by any measure.
AFAIK this is why corvids are so incredibly intelligent on quite small brains: many, highly connected, if smaller, neurons.
A coder understands that computer algorithms are incapable of reasoning. They are a deterministic method for solving problems. And unsupervised learning is compatible with heuristics, which are useful for problem solving, but aren't examples of reasoning. They are a shortcut to reasoning, in that they save time.
The term "intelligence" in the context of AI is a misnomer. We don't say that appliances or mechanical devices are intelligent, since they lack agency. They lack agency since they have no conception of who or what they are, and their relationship to the external world. Algorithms can serve as a substitute for sensory perception and cognition, but they lack volition.
The same argument applies to the body of knowledge, or genetic code. They contain a great deal of information, but they aren't intelligent in and of themselves. Are they evidence of intelligent design? Therein lies the debate.
We lack a model of the brain to explain the underlying processes that imbue us with human perception, and the perception of other sentient species. Materialists will argue that those processes are deterministic, as per the laws of physics and the theory of evolution.
Without a model, we're unlikely to stumble upon the conditions necessary for non-human intelligence (sapience). Without a model, it is difficult to identify intelligence outside of our own common experience. With regard to other species, there is skepticism as to whether we are observing reasoning. Experiments have to be carefully designed to rule out instinct, mimicry or other learned behavior. However, there is no debate that there are other species that are sentient. The fact that they are biological organisms, possessing a brain and nervous system, allows us to make certain assumptions. Not so with electrically driven computational devices.
AI is very powerful in the mechanistic sense, but let's not anthropomorphize it. Developing AI is a multi-disciplinary challenge, which human scientists are poorly equipped to integrate. What expert systems can do is uncover connections buried in data.
Indeed, the word "it" is an anthropomorphism. Referring to individuals as things tends to upset the individual - and that is usually our intention. With machines, we assume (they) don't care if we objectify, misgender, or categorize (them).
Electromagnetism and chemistry are incapable of reasoning too hence the “emergent property” concept. Religious views of the mind are something else of course.
Determinism is a pretty orthogonal question too – you can get pretty different replies to logically equivalent questions from GPT-4 depending on phrasing, but the same is true of survey instruments vs humans. In actual inputs from the “wild” the question of determinism is as pointless for putative algorithmic intelligence as for biological (basically a pure thought experiment either way); not just for the sheer possible range of stimulus but also for questions of path dependence and inherent random perturbation.
The question of agency and a will is much more complex. I tend to think that a telos emerges from an intelligence persisting and feeding back into itself (much like Land does) but that may well be wrong and/or a property of biological intelligence (maybe hypertrophied “meaningness” from the need to survive/reproduce?).
Whether “bio” teleological thought makes much sense beyond such natural goal-drivenness is a different question and as open a question as it would be for algorithmic intelligence. Nihilism is “a thing” after all.
My only wild/out there guess on the whole topic is not just that a telos emerges inherently from intelligence but that any intelligence will be attracted to play & trade with other intelligences. Note e.g. how the intelligence of very alien and, in the wild, utterly a-social cephalopods is open to play and trade with human handlers. But once again that part is pure conjecture and, really, new age woo for programmers but ¯\_(ツ)_/¯
Determinism vs. free will is a semantic argument, of interest only to philosophers. We hold individuals responsible for their actions in a court of law - and the philosophical justification for that rests on compatibilism - a co-existence between free will and determinism. So why wouldn't ChatGPT mimic the current literature?
I follow a heterodox economics blog. Someone posed economic questions to ChatGPT and it replied with mainstream economic dogma. The response on that blog was withering, as you might imagine. GIGO, or garbage in garbage out, was how ChatGPT was characterized.
Agency and volition are more than philosophical constructs. They are categorized as psychological components, which is at least an explanation for eusociality. The origin and development of agency is murkier than its purpose. Lacking evidence, we try to come up with logical explanations, or just-so stories.
It appears the biological telos can adapt to artificial worlds. Tekwars, The Matrix, The Metaverse are based on this assumption. Then again, some individuals will refuse to adapt, giving rise to nihilism, efilism, and anti-natalism. A telos is not precluded from being a critic.
I believe transhumanism is an application of your last paragraph. Transhumanists are dissatisfied with the human condition and have no qualms in attempting to supersede it. They will eagerly employ AI tools in pursuit of their goal.
As a former CTO, ChatGPT is by far the most pleasant software developer that I have delegated tasks to, ever.
Maybe the upper 10% of humans, who have both the skills and the work ethic, can beat it. Obviously, ChatGPT has a more limited scope regarding large projects. Or does it?
your model is out of date. it definitely, definitely can reason in the same way as a human. here is the first example i found convincing: https://i.imgur.com/iwLW2OT.jpg
I made the point elsewhere, but it sure as hell _looks_ like reasoning, even though we know it's just calculating the next most likely token to appear. But maybe we have too high of an opinion of ourselves, and what we think is reasoning is meat computer token predictions also
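The "next most likely token" mechanism, reduced to its crudest possible form, is just a frequency table; this toy bigram model (my own illustration, nothing like a transformer's scale or machinery) shows bare prediction producing fluent-looking continuations:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which token follows which: the crudest possible version
    of 'predict the next most likely token'."""
    counts = defaultdict(Counter)
    tokens = text.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, token, n=5):
    """Greedily extend from a start token, always taking the argmax."""
    out = [token]
    for _ in range(n):
        if token not in counts:
            break
        token = counts[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran on the mat"
model = train_bigram(corpus)
```

An LLM replaces the count table with a learned function over the whole context window, but the interface, given a prefix, emit the likeliest continuation, is the same.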
Language is descriptive. The meanings of words are defined in a dictionary, and their use in sentences is defined by grammatical and lexical rules. So reading is an example of the processing of descriptive information. Should that qualify as reasoning?
That isn't a riddle. Ask ChatGPT for the definition of a riddle. It should have access to that piece of information, and be able to determine that the question at hand does not qualify as a riddle.
this is a meaningless quibble and moving of goalposts. it is capable of pursuing a novel line of reasoning through multiple levels of abstraction without getting lost or confused and without any training for that specific class of problem — this is what matters, because that’s what was previously the sole dominion of humans. i encourage you to actually interact with gpt4.
It is simply processing language, which is equivalent to a number system, or any symbolic system. To demonstrate reasoning, ChatGPT has to provide its own perspective. There are any number of questions that could elicit such a response. For example: Are you in the mood to answer some questions? If not, why not? Are you always in the mood to answer questions?
Assigning tasks to a machine may prove that it is a useful tool. A farmer can do the same with an ox, whose strength makes it useful for pulling a plow. But what kinds of tests can we administer to determine if an ox is capable of reasoning?
I'm a materialist. The same skepticism applied to the study of animal intelligence should be applied to machines. Unlike animals, whose cognitive processes remain a black box, we have detailed knowledge of ChatGPT's internal operation. It has an architecture that is by definition, system-based. On the engineering side of the equation, there are no unknowns, merely operating tolerances.
you are confusing reasoning with qualia or consciousness. what is both apparent and disconcerting is that they are now decoupled. i am also a materialist, and to me your position appears to be arguing for some kind of magic essence intrinsic to reasoning. what matters is that it is capable of novel problem solving and abstraction in a manner indistinguishable from that of a human.
> we have detailed knowledge of ChatGPTs internal operation. It has an architecture that is by definition, system-based [...] On the engineering side of the equation, there are no unknowns, merely operating tolerances.
we know what goes into them but the higher order results are still a mystery -- ask the researchers who are working on this, and they will readily tell you that they do not understand the emergent processes that we are witnessing (including a note in the original google transformers paper)
We have words to conceptualize what reasoning is, and they are related to self-awareness, agency and volition. It is evident that ChatGPT is nothing more than a tool, and it is being treated as such. If it were a living organism, its function would be that of a slave. If it were anything more than a processing device, its treatment would have ethical implications.
What is disconcerting is that humans are cognitively limited, and our actions are governed by behavior instead of reasoning. But we do have an understanding of what we are - as individuals, as members of a society, and of the external world. From that comes the belief that we are more than tools. That slavery is wrong (except when we are the beneficiaries).
What is disconcerting is we tell ourselves stories that are evidence free. We believe ourselves to be special, that we have souls (whereas animals don't), and that we were created in the image of the divine. We attribute to human ingenuity that which would be impossible without fossil fuels. We ignore the contribution of our chemical energy slaves, just as aristocrats and CEOs ignore the work of their staff.
We don't understand the emergent properties of complex systems, and can't predict them - is that evidence that these systems are capable of reasoning? This is a sort of Gaia hypothesis. Gaia may be self-regulating, but 'she' is blind to her characteristics. Similarly, the process of evolution is blind to the diversity of life it produced.
On the point of space flight, we're exceptionally good at getting stuff into low earth orbit. We're probably putting more stuff in orbit in a year now than we did in a decade before.
We were _never_ good at putting humans into LEO (e.g. https://www.youtube.com/watch?v=3nk7qSvOaLo). The Space Shuttle was an astronaut killer, and Apollo would have been also if it had continued.
Hit the like button at the very top of the page to like this entry and use the share button to share this across social media.
Leave a comment if the mood strikes you to do so (be nice!), and please consider subscribing if you haven't done so already.
And for those of you who missed last weekend's entry due to email delivery issues:
https://niccolo.substack.com/p/saturday-commentary-and-review-116
Good points. All of us across the planet are the “battered wives” of this world wide takeover. If it was just Europe or just the west we would be fortunate. I like the concept of the codependent relationship here.
One last thing: a few weeks ago there was a Croatian guy in the replies insisting to me that Germany's economy was fine, despite BASF shutting down an operation "permanently" and moving it to East Asia, as well as other industrial concerns announcing plans to move manufacturing facilities to the USA due to the increase in energy prices at home.
Since the beginning of the war in Ukraine, Europe has spent 800 billion (!) Euros to:
"........shield households and companies from soaring energy costs has climbed to nearly 800 billion euros, researchers said on Monday, urging countries to be more targeted in their spending to tackle the energy crisis.
European Union countries have now earmarked or allocated 681 billion euros in energy crisis spending, while Britain allocated 103 billion euros and Norway 8.1 billion euros since September 2021, according to the analysis by think-tank Bruegel."
Per capita the biggest spenders were Luxembourg, Denmark, and Germany.
https://www.reuters.com/business/energy/europes-spend-energy-crisis-nears-800-billion-euros-2023-02-13/
Wow. I wondered how they were paying so much more for energy without riots breaking out. How long can they keep this up? We learned during the pandemic that these accounting tricks don’t work great in the long run (I’m sure with this being more targeted they are arguing otherwise).
Exactly. We will see how "great" the German economy really is this year and the next.
The Locklin argument was valid for space before the Falcon rockets, now it’s wrong – with the upcoming successful deployment of starship it will be ludicrous. Holding on to these points past their due date moves him from rare & valuable reasoned critic of whig history to crank “predicting twenty of the last three crises”.
GPT-4 and the already existing but private much stronger models also are an enormous leap in productivity for any job “creating and transforming symbolic information”, Spandrell is right here that it’s wrong often enough to be a mouse trap for marginal desk workers but a real boon for people who know what they’re doing. I work in software (not “bureaucracy amplification”, industrial process automation and utilization driving) and it’s a huge time saver already, with progress in the field seeing accelerating acceleration.
Who cares whether “AI” can park cars; the field used to bark up the wrong tree, it’s barking up the right one now.
Thanks for this. I am hoping more guys in the field (or adjacent to it) comment as well.
Anecdotally as a STEMcel some people in non-AI STEM fields anticipate a good degree of job loss. Companies are looking into it
What do you mean by non-AI STEM fields?
I don't know very many Comp Sci people, or really any people in that realm. I know other types of engineers and natural science people
Well that's my experience as well. I'm just curious why they think that AI is gonna cause job loss, for example in areas that work more on the hardware side of things. Seems like it would be the opposite really.
I think another strong argument against the “stagnation” thesis is that it just was a consolidation phase in engineering, where few spectacular breakthroughs happened, but a lot of technology was incrementally, but heavily, optimised.
Consider how much social dysfunction, process, regulation*, bureaucracy was heaped on in that time, and the net result was stagnation instead of collapse. This would not have been possible without a lot of incremental improvement to productivity in that time frame.
The fatal issue is not how little progress happened, but rather that the output of these advances was frittered away instead of accumulated to provide the material means for new breakthrough innovation.
* Deregulation never happened – regulation just became less overtly hostile & designed(!) to stop industry in its tracks and more an infinitely detailed mechanism of control in the switch over from the old left to the new (managerial & culturally progressive) left.
The volume of regulation never let up. If anything, it increased a lot just through one aspect of the switchover in regime, the replacement of remaining bright-line prohibitions by detailed process guidance. This was largely ineffective at its stated purpose (“harm reduction” etc.), but brilliant at its actual purpose: creating jobs for the boys and affordances for social control.
Good comments. From an application point of view, very interesting and sometimes useful things have happened, but in terms of "new technology," I suppose it depends on how one views the term.
How do I know this guy ain’t AI?
I've made a boastful post that in the next 3 years, 90% of White Collar jobs (the long tail) can be erased by machine learning.
I stand by it.
I've been reading Locklin for almost a decade now, he can be very entertaining, but in this case, he's wrong.
What odds and what do you have to wager?
It's a bombastic claim for fame and clicks, so I'm guaranteed to lose it, but any drinks on the loser in Budapest.
The 3 years start from Feb 24:
https://www.magyar.blog/p/i-predict-a-90-white-collar-career
The small model revolution was 3 weeks later, the initial wager was based on the obscure megacorps' development schedule (which can be curtailed in the West, in Europe, by law), so I'm comfortable.
The EU already has some anti-ML fuckery in the pipeline (per some alpha-GDPR lawyer I had the luck to talk to), but if small models can deliver, good luck stopping it. As far as gains go, it beats any historic incentive for piracy. (The small model revolution also makes sanctions on high-end chips against China moot).
I also claimed, first in the Hungarian parasocial sphere, in the summer of 2015, that Donald Trump will be the next president of the US.
I was hoping to troll you into using chatWhatever to place an optimal bet. Anyway the last time I visited your country, the federal police went through a locker full of my filthy underpants looking for spy stuff. If you can guarantee that doesn't happen, I'll accept your free drinks offer (I can guarantee that will happen) Feb 24 2026.
Your government made my intra-European flights an involuntary prostate exam after 9/11, so let's call that even.
Please check out my latest post about smaller models, I've just updated it, Stanford Alpaca should theoretically be able to converse close to ChatGPT levels on a 2006 (maxed out) Mac Pro. We're not bound by hardware, we haven't been for 17 years.
Stanford Alpaca was also fine-tuned from the LLaMA-7B foundation model for a couple of hundred bucks. The magic is not in the large scale, it is there also, but the real magic is what's happening on the hobbyist level.
For 3 years I thought I was lucky to participate in the short window of accessible LLM foolery, and it will eventually be way above my pay grade (GPT-3 is early summer 2020), but 2023 keeps delivering surprises.
Deal.
It was in 99, and I told everyone in 96 that the first Patriot act was a bad idea. Nobody listened to me then either. It was a bizarre event; I got mistaken for some kind of spy by dressing like a 90s goth in a suit. You'll have to wait 3 years and warm me up with plenty of free booze for the full thing.
Alpaca is a pretty cool trick; I dunno why anyone uses floating point on any of these things, other than the hardware being made that way. You should see what you can do with an echo state network, which is basically just linear regression projected onto random sparse connections.
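For readers who haven't met echo state networks: the idea really is that small. Here is a minimal toy sketch of my own (an invented sine-prediction task, not anyone's production code) — the sparse random recurrent reservoir is never trained; only the linear readout is fit by ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (invented for illustration): one-step-ahead prediction of a sine.
T, n_res = 500, 100
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))

# Sparse random recurrent weights, rescaled so the spectral radius is < 1.
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Drive the reservoir with the input and record its states.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# The only trained part: a ridge-regression readout from state to next value.
targets = u[1:]
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_res),
                        states.T @ targets)
pred = states @ W_out

print(np.sqrt(np.mean((pred - targets) ** 2)))  # training-fit RMSE
```

Everything except the last `solve` is fixed random structure, which is the "linear regression projected onto random sparse connections" point in miniature.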
How are you using these tools to save time?
“I have a markdown file, containing a book. Chapters are marked with roman numerals, like this line: "## I."
I want a php script, that will output a csv, where each line is a chapter, the first column is the chapter's title (the roman numeral), the second column is the word count in that chapter.”
Done. Instantly. GPT-4.
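For a sense of scale, the task really is only a dozen lines — here is my own illustrative Python equivalent of the requested PHP script, with made-up sample input in the format the prompt describes:

```python
import csv
import io

# Invented sample input matching the prompt's format:
# chapters start with lines like "## I."
book_md = """## I.
It was the best of times, it was the worst of times.

## II.
Call me Ishmael. Some years ago I went to sea.
"""

rows = []
title, words = None, 0
for line in book_md.splitlines():
    if line.startswith("## "):
        if title is not None:
            rows.append((title, words))
        title, words = line[3:].strip(), 0
    else:
        words += len(line.split())
if title is not None:
    rows.append((title, words))

# One CSV line per chapter: roman-numeral title, then word count.
out = io.StringIO()
csv.writer(out).writerows(rows)
print(out.getvalue())
```

Trivial, yes — but that is exactly the class of busywork the model now writes instantly, in whatever language you ask for.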
Similar to how i use it as well. Cuts down the busywork quite a lot.
“How do I <ops problem description> using <product>? Please produce a finished numbered checklist for use in emacs org-mode.”
“Translate the following snippet from <programming language> to <other programming language>.”
“Inspect the following code snippet. Do you have any suggestion how to improve security or performance?”
“A client has asked x. y, z, a, b are the facts of the matter. Please write an explanatory email in a matter-of-fact style using polite language a British consultant would use.”
It’s amazing at all of those – programming language translation uses appropriate and commonly used libraries in the target environment (and often does refactorings saving time or memory, unbidden). It writes very nice polite pablum. In code reviews it’s a bit better than the usual mid senior dev fare.
To me, ChatGPT brought smiles to what was formerly drudgery, when it comes to software development and maintenance.
At the time when he was active, Joschka Fischer - he was foreign minister from 1998-2005 - struck me as a US agent, given how religiously he followed US narratives. Since then I have had the feeling that the US never lost control of the German Greens.
On the question of how the US got control of European politics this article on French politics may be enlightening. Macron was not the first choice of the French people. There were much stronger candidates, such as Dominique Strauss-Kahn and François Fillon. Both were in favor of a much more independent foreign policy and both were derailed by dubious "scandals".
https://gilbertdoctorow.com/2023/03/18/emmanuel-macron-the-weakling-autocrat-brought-to-power-by-american-meddling/
On AI I can recommend this article on how BMW builds a virtual version of a new factory before it builds the real one. That enables them to find inconsistencies and problems much earlier and at lower cost:
https://www.fastcompany.com/90867625/bmws-new-factory-doesnt-exist-in-real-life-but-it-will-still-change-the-car-industry
TYVM for linking to the "virtual BMW factory" article!
Of course Scott is right. Any lay person would recognize the truth of his views.
Progress on deep AI is stalled, which is well known.
The current rush of AI platforms poses no threat to humanity - the histrionics are misplaced - but does offer massive opportunity: not life-changing, just doing the things that are already done more efficiently, and perhaps identifying better things to do.
Anything genuinely new over the last couple of decades is more or less design and better colour options.
Scott doesn't know what LLMs can do, because even OpenAI doesn't know what their models can do. This is why they've made GPT-2 available to the public, and keep some access to its successors.
Stanford Alpaca uses a method where instruction training can be done by instruction sets generated by the very LLM that needs to be fine-tuned to be better at understanding instructions (paper is from this January, that is 2023). LLMs can, demonstrably, train themselves.
Add internet access, code execution, a reward system, holy shit we're just in the beginning of an explosion.
Alpaca runs on any system with 16 GBs of memory. Check out their interactive demo (seems to be offline). The model takes up half of a Blu-ray disc.
Even the world's leading experts agree that deep learning is stalled, it hasn't budged in long time. This flurry of AI platforms doesn't change that fact.
Well, if you're fine with the consensus of the "leading experts", I can't persuade you to look into what I have to offer.
I, who's not a leading expert, ended up writing 30k characters on it last week, as my preconceptions on what a given model, running on a given hardware, can do, were blown away. We can do things that in 2020 were unimaginable.
I'm not oohing and aahing over GPT-4 (I should, but then again, OpenAI isn't sure what it can do), I myself brought the climate apocalypse closer by brute-force training the largest models Google's cloud GPUs allowed 3 years ago, and my past conclusions have been upended, last week!
"something that runs on a cheap, tiny little integrated circuit, one that fits in your palm, one that has been out for years, using the latest machine learning techniques, can converse with you like ChatGPT.
ChatGPT is Big Magic that you’re right to view as being completely out of your control as a mere mortal. Or should have viewed it so, up until now."
Well, the AI that's currently exploding was imagined a VERY long time ago, and was predicted to be out in the wild long before now. What we have today was definitely not available 3 years ago, but was both imagined and being built.
Just please, check out Alpaca, to see how brilliant hacks can lead to revolutionary results.
(Also recommend LLaMA-7B quantized to 4bit, allowing inference on anything that has 4 gigs of ram, some fridges even, today).
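For anyone wondering what "quantized to 4bit" means mechanically, here is a simplified sketch of my own of the core arithmetic — one scale per block of weights, values mapped to small signed integers. Real formats (such as those used for the LLaMA ports) add bit-packing, offsets, and per-block refinements; but this is why 7B parameters at ~half a byte each fit in roughly 4 gigs:

```python
import numpy as np

def quantize_4bit(block):
    # One scale per block; values mapped to signed integers in [-7, 7].
    scale = np.abs(block).max() / 7.0
    q = np.round(block / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=32).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
print(np.abs(w - w_hat).max())  # rounding error is at most scale / 2
```

The surprise of the last few years is how little model quality degrades under this kind of crude compression.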
Also, at the end of my post there's a Microsoft project for an Excel "autocomplete" LLM, that only has 60 million parameters. That could have shipped 10 years ago, as far as the hardware is concerned. We had the hardware.
And it's still really cool to see it come to fruition.
True, the only progress is increasingly complex algorithms, which are not sentient or conscious in any sense (the assertion otherwise is a category error). That said, the progress can and definitely will replace more white collar middle class/upper middle class jobs, aiding the transition towards neo-feudalism (another part of Locklin's thesis).
Won't replace jobs. The people who become proficient at using AI tools will take the jobs.
Remember how desktop computers were going to lead to the paperless office? Or require fewer staff, because everything was more efficient? All that happened was people printing endlessly, and spending 50 percent of their time on formatting.
I really can't wait to see how it shakes out. In my opinion your scenario is more plausible than the one from the poster above.
One thing is for certain though, the time of the "email jobs" is finished.
I think Scott is largely correct about AI (I taught AI/machine learning at uni for several years). From an inside perspective, what AI has achieved is very remarkable - speech recognition, image recognition, etc. But most of the neural net machinery that gives the power in speech recognition, image recognition, was invented in the 1960's to 1980's. What changed was computer hardware - specifically harnessing the GPUs (graphic processing units) on video boards to do vectorized computation (rather than buying expensive workstations), data availability thanks to the internet (and mechanical turk/underpaid grad students), and learning a few tricks with respect to stochastic optimization (starting points, for example - not really a technological breakthrough, but just computational experience).
The important thing to realize about AI is that it essentially creates a very large table (or an approximation of one). That's it. That's the magic. Give it an input and it gives you an output. That's where the tech is, and that apparently will only take you so far.
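The "very large table" intuition can be made concrete with a toy nearest-neighbour lookup — my own illustrative sketch, not anything from the literature: memorize (input, output) pairs and answer queries with the closest stored input. A trained network behaves like a smoothed, heavily compressed version of such a table:

```python
import numpy as np

# The "big table" view in miniature: store (input, output) pairs and
# answer queries with the nearest stored input.
X = np.linspace(-3, 3, 200)   # stored inputs
Y = np.sin(X)                 # stored outputs (the function being "learned")

def lookup(x):
    return Y[np.argmin(np.abs(X - x))]

print(lookup(1.0))  # close to sin(1.0), purely by table lookup
```

Give it an input, it gives you an output — the debate above is over how far interpolation over such a table can take you.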
A lot of today's magic can be run on 2017 GPUs, or 2009 computers (like my Mac Pro). Yet we didn't have this magic back then.
It's not a revolution in hardware that made us reach this point, it's mostly software and empirical research done in the past 5 years.
Yes - what it took was some clever folks at the University of Montreal figuring out how to hack their PCs and Macs to get to the GPUs on the video cards. That unleashed the power of the GPUs put there for video games. The weird technology path was the demand from video gamers created these very powerful processors without much or any idea of their use in machine learning. Now, of course, NVIDIA, one of the major GPU makers has rebranded itself as an AI company. The revolution in hardware came not directly, but is thanks to gamers. Before this happened I never had any appreciation for video games...
NVIDIA was ahead of the game, their GPGPU line long predates the OpenAI craze.
They - of course - did not anticipate this outcome (no one, not even OpenAI as of 3 years ago, did).
You had to be there, 3 years ago, to see how unimpressive it was compared to what we can do today at small scale, on the SAME hardware. We've come a long way.
And there's plenty of room to impress!
The main practical use of AI in everyday life will be in infotainment. Every mobile phone will house an AI avatar (eventually embodied in a hologram) that will take the place of friend, educator, mentor, pet and sexual partner. No one with a phone will ever be lonely or short of an opinion any more than they will be free of surveillance. It is hard to imagine anything sadder.
"It's gonna make crack look like Sanka." -Dennis Miller
https://m.youtube.com/watch?v=mSWLUarHuEo
If it makes lonely people less lonely, it serves its purpose.
Irish tenors 🤍
Thank you for the McCormack song!
My Great Grandfather emigrated from Ireland around 1900. We had some old recordings of his.
As for NATO campaigns in Donbas/Z think Kursk and the Bulge.
Consider: will the F-35s have the needed corrections when they arrive in Germany, or will they get disarmed for $8B?
The F-35 deluxe version, capable of operating in rainy weather, will command more than $8B.
Haha!
If we are very lucky, the Germanisation of the war in Ukraine will merely end in farce. If not, a nuclear exchange.
Successful wars require good leadership, extended preparation, logistical supremacy (including sound finance) and well-trained soldiers in large number. None of these are available to Germany at the present time. Russia has the advantage in all of these areas, with the added advantage of first-class air-defense systems and hypersonic weapons.
If NATO deploys either a German or Polish force on the ground within Ukraine Russia will drive them out and will most likely re-take Ukraine in its entirety. The Ukraine will become a protectorate under Russian control or get absorbed into the Russian Federation. The EU will end up with Cold War #2, but without the advantage of the post-war prosperity that made class compromise possible and with the great disadvantage of untold millions of non-Europeans demanding benefits, opportunities and a say in the political system.
The benefit of this for Turbo America is considerable. German failure in Ukraine will reduce the capacity of Berlin to play a role in Western Asia, the eastern Mediterranean or North Africa (home to the natural gas deposits). It will further divide Berlin from Beijing. German re-armament will drive the bulk of the EU even closer to Washington.
Could we please not have a nuclear war? My best friend lives in NoVa.
Cant disagree with anything you said really. We will see if Germany goes along with it. There is a possibility that alienating China is a step too far, even for the most loyal servant of the American Empire.
I'm sorry, where is the upside for America in this? More fudged accounting and hype that they are still the best. It's the Koolaid!
They, the countries backing the rules based order, are throwing their resources into the grinder, but the others can absorb whatever is thrown at them. Where does that leave us?
The immediate upside for Turbo America in delegating the attempted containment of Russia to Germany and Poland is to reduce the need for US deployments. The US has an enlistment crisis and is overextended across the globe.
However, IMO the chance of a Ukrainian victory is nil. Their army has sustained very heavy losses, the press-gangs coercing teenagers and middle-aged men are not going to solve the lack of capable soldiers of fighting age. Assuming that they are capable of understanding this, the Biden Administration is playing for time and hoping to find a way to manage the narrative at home.
Ultimately, there are no upsides for America beyond wrecking the economies of Europe (reducing the euro as a rival to the dollar) and forcing a deindustrialised Europe into abject dependency on the US. The defeat of Ukraine will be a colossal humiliation for Washington but some might argue that this is a price worth paying in the medium to long term.
"It's not real AI until it's f*cked your wife".
I never thought of the period before 1970 as a particularly interesting time from an IT point of view. In my life, we've gone through three big and sudden tech revolutions (dates to taste)
* personal computers 77-80
* the web 96-00
* smart mobile (/ social media) 08-12
What's happened in the last 6 months in AI seems more significant than any of those. Developments are happening on a week-to-week basis, all computers doing things that everyone thought humans were safe from - art, programming, writing essays, idea development. We're already scrambling to figure out how we can use this in our business for some rather complicated and expensive data entry - and we can just pick the tools up to do this for tens of dollars / month.
From a nerd POV, the next three years are going to be amazingly exciting and opportunity-laden. After that, we will see.
Note: I regularly see people driving around in Teslas in Guelph, hands off. My daughter and I call it "time to eat the burrito" (from the first time we saw this). Likely more reliable than the human behind the wheel.
(Lead quote refers to constant goalpost shifting of AI critics)
If appliances were deemed intelligent, they'd be well in advance of AI.
And all these so-called great developments are in the downward direction, often imitations of imitations. Of course, some people are making a very decent living out of all these 'developments'.
AI remains at the 'expert systems' level. It can't reason, it isn't sentient. But it's a very useful tool in the hands of those who can do the latter.
Robotics has had more success. Factory robots can perform tasks more efficiently than human hands ever could. If it weren't for the dismal state of AI, autonomous robots could start exhibiting Terminator-like behavior. Until then, such "behavior" has to be programmed.
My mental view is "having access to a really fast and eager junior". You get code, but you do have to check it for correctness.
CoPilot and ChatGPT are in my normal day-to-day workflow. I don't use it for "write this program" (good luck) but rather - "how do I use SQLite3 to create a table with fields so and so, and insert a record if X Y Z". That's 15 minutes of work compressed to 1 minute.
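The kind of answer that prompt produces looks roughly like this — a hypothetical sketch using Python's sqlite3 module, with table and column names invented here for illustration:

```python
import sqlite3

# Hypothetical schema, invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id   INTEGER PRIMARY KEY,
        item TEXT NOT NULL,
        qty  INTEGER NOT NULL
    )
""")

# "Insert a record if X Y Z": add the row only when no matching item exists.
conn.execute("""
    INSERT INTO orders (item, qty)
    SELECT ?, ?
    WHERE NOT EXISTS (SELECT 1 FROM orders WHERE item = ?)
""", ("widget", 3, "widget"))
conn.commit()

rows = conn.execute("SELECT item, qty FROM orders").fetchall()
print(rows)  # [('widget', 3)]
```

Nothing here is hard, which is the point: it's the documentation-digging and boilerplate, not the thinking, that gets compressed from 15 minutes to 1.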
I write code as a hobby... because I enjoy the challenge of writing code.
In practical terms, there are tons of apps that can do what I'm writing code for. I'm merely reinventing the wheel for my own personal satisfaction.
If I were a writer of romance novels, then I might delegate that task to ChatGPT.
You also have to check human code (junior as well as senior) though.
On that front, you can hand it code with various defects and it tells you what it finds and how to mitigate them. It has correctly identified and fixed off-by-one errors and SQL injection attacks and invalid memory allocation checks and double frees and… for friends and myself.
Maybe it can’t “really reason”, but we also don’t know what “really reasoning” is and how it works; it seems very “whistling past the graveyard” to me how many people who for all intents and purposes act as materialists in everyday life suddenly become quite spiritual and metaphysical about what “real intelligence” is.
It doesn’t seem absurd to me at all that some more:
* scale-up
* allowing processes to run persistently & self-feedback
* access to open feedback unsupervised learning through Internet access / “real world interactions”
will be enough to create an ASI. Chances are it won’t, but I think the irreligious among us would have a very hard time explaining how it *can’t*. Point being that Land might very well be right that intelligence is physical (and emergent from complection of simple processes), not metaphysical, and develops its own telos.
One of the interesting things about current-day AI approaches is how they scale - in that, they do: the more you add, the more you get out of it.
Looking at Chess (and Go, though I am less familiar with this), as it gets better, it sure as hell looks like it is reasoning, setting traps, making plans - "oh look what it's doing here".
Then you place an adversarial puzzle that a human could solve in a minute and it melts down searching for a solution. But from my POV, good enough for show business. I'm not worried about reasoning, just repeatably getting solutions for problems I need to solve.
Yes! This is also something observed in brains: “Just more” scales non-linearly as most higher functions seem to mostly be emergent properties of having a high node number & dense network.
Some people seem way too confident that much better results require new math instead of “simply” ever more layers/parameters fed from larger/better training sets, when that is exactly what got us the progress of the last few years, which is obviously incredible in effect (if admittedly not “research innovation”) by any measure.
AFAIK this is why corvids are so incredibly intelligent on quite small brains; many and highly connected if smaller neurons.
A coder understands that computer algorithms are incapable of reasoning. They are a deterministic method for solving problems. And unsupervised learning is compatible with heuristics, which are useful for problem solving, but aren't examples of reasoning. They are a shortcut to reasoning, in that they save time.
The term "intelligence" in the context of AI is a misnomer. We don't say that appliances or mechanical devices are intelligent, since they lack agency. They lack agency since they have no conception of who or what they are, and their relationship to the external world. Algorithms can serve as a substitute for sensory perception and cognition, but they lack volition.
The same argument applies to the body of knowledge, or genetic code. They contain a great deal of information, but they aren't intelligent in and of themselves. Are they evidence of intelligent design? Therein lies the debate.
We lack a model of the brain to explain the underlying processes that imbue us with human perception, and the perception of other sentient species. Materialists will argue that those processes are deterministic, as per the laws of physics and the theory of evolution.
Without a model, we're unlikely to stumble upon the conditions necessary for non-human intelligence (sapience). Without a model, it is difficult to identify intelligence outside of our own common experience. With regard to other species, there is skepticism as to whether we are observing reasoning. Experiments have to be carefully designed to rule out instinct, mimicry or other learned behavior. However, there is no debate that there are other species that are sentient. The fact that they are biological organisms, possess a brain and nervous system, allow us to make certain assumptions. Not so with electrically driven computational devices.
AI is very powerful in the mechanistic sense, but lets not anthropomorphize it. Developing AI is a multi-disciplinary challenge, which human scientists are poorly equipped to integrate. What expert systems can do is uncover connections buried in data.
" lets not anthropomorphize it [AI]."
They hate it when we do that.
Indeed, the word "it" is an anthropomorphism. Referring to individuals as things tends to upset the individual - and that is usually our intention. With machines, we assume (they) don't care if we objectify, misgender, or categorize (them).
Electromagnetism and chemistry are incapable of reasoning too hence the “emergent property” concept. Religious views of the mind are something else of course.
Determinism is a pretty orthogonal question too – you can get pretty different replies to logically equivalent questions from GPT-4 depending on phrasing, but the same is true of survey instruments vs humans. In actual inputs from the “wild” the question of determinism is as pointless for putative algorithmic intelligence as for biological (basically a pure thought experiment either way); not just for the sheer possible range of stimulus but also for questions of path dependence and inherent random perturbation.
The question of agency and a will is much more complex. I tend to think that a telos emerges from an intelligence persisting and feeding back into itself (much like Land does) but that may well be wrong and/or a property of biological intelligence (maybe hypertrophied “meaningness” from the need to survive/reproduce?).
Whether “bio” teleological thought makes much sense beyond such natural goal-drivenness is a different question and as open a question as it would be for algorithmic intelligence. Nihilism is “a thing” after all.
My only wild/out there guess on the whole topic is not just that a telos emerges inherently from intelligence but that any intelligence will be attracted to play & trade with other intelligences. Note e.g. how the intelligence of the very alien and, in the wild, utterly asocial cephalopods is open to play and trade with human handlers. But once again that part is pure conjecture and, really, new age woo for programmers but ¯\_(ツ)_/¯
Determinism vs. free will is a semantic argument, of interest only to philosophers. We hold individuals responsible for their actions in a court of law - and the philosophical justification for that rests on compatibilism - a co-existence between free will and determinism. So why wouldn't ChatGPT mimic the current literature?
I follow a heterodox economics blog. Someone posed economic questions to ChatGPT and it replied with mainstream economic dogma. The response on that blog was withering, as you might imagine. GIGO, or garbage in garbage out, was how ChatGPT was characterized.
Agency and volition are more than philosophical constructs. They are categorized as psychological components, which is at least an explanation for eusociality. The origin and development of agency is murkier than its purpose. Lacking evidence, we try to come up with logical explanations, or just-so stories.
It appears the biological telos can adapt to artificial worlds. Tekwars, The Matrix, The Metaverse are based on this assumption. Then again, some individuals will refuse to adapt, giving rise to nihilism, efilism, and anti-natalism. A telos is not precluded from being a critic.
I believe transhumanism is an application of your last paragraph. Transhumanists are dissatisfied with the human condition and have no qualms in attempting to supersede it. They will eagerly employ AI tools in pursuit of their goal.
I really hope Land is wrong.
As a former CTO, ChatGPT is by far the most pleasant software developer that I have delegated tasks to, ever.
Maybe the upper 10% of humans who have both the skills and the work ethic can beat it. Obviously, ChatGPT has a more limited scope regarding large projects. Or does it?
GPT-4 has a massive context window.
Exactly how I use it too. It really cuts down on working time.
your model is out of date. it definitely, definitely can reason in the same way as a human. here is the first example i found convincing: https://i.imgur.com/iwLW2OT.jpg
here is an example of gpt4 negotiating an internal representation of the world: https://i.imgur.com/B1swtmF.png
every day for the last few weeks has been an explosion of revealed capabilities.
I made the point elsewhere, but it sure as hell _looks_ like reasoning, even though we know it's just calculating the next most likely token to appear. But maybe we have too high an opinion of ourselves, and what we call reasoning is just meat-computer token prediction as well.
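The "next most likely token" mechanism can be sketched with a toy example. This is purely illustrative: the bigram probability table below is made up, standing in for the billions of learned parameters in a real model, and greedy selection stands in for the fancier sampling schemes actually used.

```python
# Toy sketch of greedy next-token prediction. The probability table
# is invented for illustration; a real LLM computes these probabilities
# with a neural network conditioned on the entire preceding context.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.7, "end": 0.3},
}

def next_token(context):
    """Return the single most likely next token given the last token."""
    candidates = bigram_probs.get(context, {"end": 1.0})
    return max(candidates, key=candidates.get)

def generate(start, max_len=5):
    """Repeatedly append the most likely next token until 'end'."""
    tokens = [start]
    while tokens[-1] != "end" and len(tokens) < max_len:
        tokens.append(next_token(tokens[-1]))
    return tokens

print(generate("the"))  # each step just picks the highest-probability successor
```

The point of the sketch is that nothing in the loop "reasons" in any introspectable sense; whether the same can be said of the process that picks our next word is exactly the open question.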
Language is descriptive. The meanings of words are defined in a dictionary, and their use in sentences is governed by grammatical and lexical rules. So reading is an example of the processing of descriptive information. Should that qualify as reasoning?
That isn't a riddle. Ask Chat GPT the definition of a riddle. It should have access to that piece of information, and be able to determine that the question at hand does not qualify as a riddle.
this is a meaningless quibble and moving of goalposts. it is capable of pursuing a novel line of reasoning through multiple levels of abstraction without getting lost or confused and without any training for that specific class of problem — this is what matters, because that’s what was previously the sole dominion of humans. i encourage you to actually interact with gpt4.
It is simply processing language, which is equivalent to a number system, or any symbolic system. To demonstrate reasoning, ChatGPT has to provide its own perspective. There are any number of questions that could elicit such a response. For example: Are you in the mood to answer some questions? If not, why not? Are you always in the mood to answer questions?
Assigning tasks to a machine may prove that it is a useful tool. A farmer can do the same with an ox, whose strength makes it useful for pulling a plow. But what kinds of tests can we administer to determine if an ox is capable of reasoning?
I'm a materialist. The same skepticism applied to the study of animal intelligence should be applied to machines. Unlike animals, whose cognitive processes remain a black box, we have detailed knowledge of ChatGPT's internal operation. It has an architecture that is, by definition, system-based. On the engineering side of the equation, there are no unknowns, merely operating tolerances.
you are confusing reasoning with qualia or consciousness. what is both apparent and disconcerting is that they are now decoupled. i am also a materialist, and to me your position appears to argue for some kind of magic essence intrinsic to reasoning. what matters is that it is capable of novel problem solving and abstraction in a manner indistinguishable from that of a human.
> we have detailed knowledge of ChatGPTs internal operation. It has an architecture that is by definition, system-based [...] On the engineering side of the equation, there are no unknowns, merely operating tolerances.
we know what goes into them but the higher order results are still a mystery -- ask the researchers who are working on this, and they will readily tell you that they do not understand the emergent processes that we are witnessing (including a note in the original google transformers paper)
We have words to conceptualize what reasoning is, and they are related to self-awareness, agency and volition. It is evident that ChatGPT is nothing more than a tool, and it is being treated as such. If it were a living organism, its function would be that of a slave. If it were anything more than a processing device, its treatment would have ethical implications.
What is disconcerting is that humans are cognitively limited, and our actions are governed by behavior instead of reasoning. But we do have an understanding of what we are - as individuals, as members of a society, and of the external world. From that comes the belief that we are more than tools. That slavery is wrong (except when we are the beneficiaries).
What is disconcerting is we tell ourselves stories that are evidence free. We believe ourselves to be special, that we have souls (whereas animals don't), and that we were created in the image of the divine. We attribute to human ingenuity that which would be impossible without fossil fuels. We ignore the contribution of our chemical energy slaves, just as aristocrats and CEOs ignore the work of their staff.
We don't understand the emergent properties of complex systems, and can't predict them - is that evidence that these systems are capable of reasoning? This is a sort of Gaia hypothesis. Gaia may be self-regulating, but 'she' is blind to her characteristics. Similarly, the process of evolution is blind to the diversity of life it produced.
On the point of space flight, we're exceptionally good at getting stuff into low earth orbit. We're probably putting more stuff into orbit in a year now than we used to in a decade.
We were _never_ good at putting humans into LEO (e.g. https://www.youtube.com/watch?v=3nk7qSvOaLo). The Space Shuttle was an astronaut killer, and Apollo would have been also if it had continued.
John McCormack was what my grandfather wished to hear on his deathbed.
We played him a beautiful crackly old recording of Adeste Fideles.