The Computer Audiophile Posted May 29, 2023

2 hours ago, NOMBEDES said:
When did it all start to go wrong? My theory: MTV destroyed the human ability to concentrate. Music video directors produced videos with quicker and quicker cuts, so viewers could barely grasp one image before the next appeared. Overexposure to this technique must have done some damage to the still-developing brains of young viewers. How many people can "deep read" today? Members of Congress don't read the bills they vote on. How many people finished "Infinite Jest"? How do we explain the division and hate that grip America today? What happened to respectful political discourse? When they killed the radio star, they may have killed something much more precious.

I was raised on MTV :~)
AudioDoctor Posted May 29, 2023

1 hour ago, The Computer Audiophile said:
I was raised on MTV :~)

That's clearly why this website is such a failure... ;-)
AnotherSpin Posted May 30, 2023

8 hours ago, The Computer Audiophile said:
I was raised on MTV :~)

I grew up in a country with a chronic scarcity of all consumer goods, including music on LPs, CDs, and broadcasts. Strangely enough, MTV was already available there five or six years before the collapse of the USSR.
PYP Posted May 30, 2023

News item: A group of industry leaders warned on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.

The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)
PYP Posted May 30, 2023

Perhaps large language models are not the right way to construct these programs, but don't throw out the BabyLM with the LLM bathwater... Those who cannot do, teach. Those who cannot teach start academic challenges. 😝 Just kidding -- I have enormous respect for those who try to better understand the human learning process.

News item: Large language models like ChatGPT and Bard, which generate conversational, original text, improve as they are fed more data. Every day, bloggers take to the internet to explain how the latest advances — an app that summarizes articles, A.I.-generated podcasts, a fine-tuned model that can answer any question related to professional basketball — will “change everything.”

But making bigger and more capable A.I. requires processing power that few companies possess, and there is growing concern that a small group, including Google, Meta, OpenAI and Microsoft, will exercise near-total control over the technology. Also, bigger language models are harder to understand. They are often described as “black boxes,” even by the people who design them, and leading figures in the field have expressed unease that A.I.’s goals may ultimately not align with our own. If bigger is better, it is also more opaque and more exclusive.

In January, a group of young academics working in natural language processing — the branch of A.I. focused on linguistic understanding — issued a challenge to try to turn this paradigm on its head. The group called for teams to create functional language models using data sets less than one-ten-thousandth the size of those used by the most advanced large language models. A successful mini-model would be nearly as capable as the high-end models but much smaller, more accessible and more compatible with humans. The project is called the BabyLM Challenge.

OpenAI’s GPT-3, released in 2020, was trained on 200 billion words; DeepMind’s Chinchilla, released in 2022, was trained on a trillion. Together with a half-dozen colleagues, Dr. Wilcox, Dr. Mueller and Dr. Warstadt conceived of the BabyLM Challenge to try to nudge language models slightly closer to human understanding. In January, they sent out a call for teams to train language models on the same number of words that a 13-year-old human encounters — roughly 100 million. Candidate models would be tested on how well they generated and picked up the nuances of language, and a winner would be declared.
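For readers curious what a BabyLM-style experiment might look like in practice, here is a minimal sketch in Python, assuming a Hugging Face Transformers setup. The corpus file name (babylm_corpus.txt), the word-budget check, and the scaled-down model dimensions are all illustrative assumptions, not the challenge's actual baseline or rules.

# A rough, hypothetical sketch: train a small causal language model on a
# word-capped corpus, in the spirit of the BabyLM Challenge described above.
# File names and model sizes are assumptions, not challenge specifications.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2Config,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

WORD_BUDGET = 100_000_000  # roughly the words a 13-year-old has encountered

# Load a local plain-text corpus (hypothetical file) and verify it fits
# the developmentally plausible word budget before training.
corpus = load_dataset("text", data_files={"train": "babylm_corpus.txt"})["train"]
total_words = sum(len(row["text"].split()) for row in corpus)
assert total_words <= WORD_BUDGET, f"{total_words} words exceeds the budget"

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# A deliberately small model: the challenge's premise is limited data,
# and 100 million words cannot support GPT-3-scale parameter counts.
config = GPT2Config(vocab_size=tokenizer.vocab_size,
                    n_layer=6, n_head=8, n_embd=512)
model = GPT2LMHeadModel(config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="babylm-small",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    # mlm=False makes the collator produce next-token-prediction labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

An actual entry would also need the challenge's fixed evaluation suite for grading linguistic nuance; the sketch only illustrates the scale of the data and model involved.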
botrytis Posted July 10, 2023

https://fortune.com/2023/07/10/sarah-silverman-openai-meta-lawsuit-authors-training-material/

Interesting. I wonder if songwriters will do the same?