AI & Musicians/Writers

-mjk-

Moderator
Staff member
Joined
Oct 14, 2018
Messages
3,824
Karma
2,673
From
Hukou Township, Hsinchu County, Taiwan
Website
phoenixmediaforge.com
Gear owned
DP-32, 2A Mixer, A3440
Several of us have been having offline discussions about AI and I thought that maybe we should bring some of these comments into a public discussion.

The title refers to musicians and writers but there are other aspects that maybe haven't been given enough thought. This very site and others like it are directly threatened by AI and as a site owner myself (another site, not this one) I am also affected by it.

Webmasters submit sites to Google and all the other search engines, voluntarily. Google crawls the sites and adds the contents of the sites to the Google database. Google then serves up the results alongside ads, to the public when inquiries are made. The results are links to the content on various sites, according to relevance.

That is, until now. Google (and others) are now serving up an AI summary of the results and placing the summary at the top of the results page. Very convenient, for sure. But also devastating for site owners.

People can now get answers to their questions without ever interacting directly with the site that provided the answer.

Active participants of this forum know that once someone gets the answer to a question, they often disappear without a word: no acknowledgement, no thanks, no update to confirm whether the proposed solutions worked. But now people don't even have to post the question if it has been addressed somewhere on the internet. Think about what this will do to the quality of information if the interactions stop.

Additionally, AI answers are often wrong. Let's face it - there are a lot of wrong answers out there. Humans can read the threads and figure out what works and what doesn't (for the most part), but AI just copies it all and often draws the wrong conclusion.

This is particularly disturbing because (for example) if you went on Youtube and said something negative about vaccines, your video would be taken down for violating policies, policies that use terms such as "egregious harm" to justify censorship. Yet you can get an AI summary with zero accountability, and that summary could contain very dangerous information that actually leads to "egregious harm". Still, the tech corporations are not liable to anyone for the consequences resulting from use of the AI-provided information. Nice, huh?

So here we have The Tascam Forums, the biggest non-official Tascam forum in the world. My own site is likewise the biggest non-official forum in the world for another equipment brand. Yet 100% of the information from both sites is not only stored in the search engines, it is now being served up to people who never visit the sites. That information is copyrighted, and when people do not visit the websites, the ads are not served and the site owners earn no ad revenue. This is a major conflict of interest! Let me emphasize this:

Google pays site owners ad revenue for serving up ads on their sites to keep the sites running. Google keeps people from visiting those sites by using AI to summarize the search results so that visitors don't need to visit the site to get the information.
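For what it's worth, there is a documented (if blunt) lever site owners can pull: both Google and OpenAI publish robots.txt user-agent tokens for their AI crawlers. Here is a minimal sketch of a robots.txt that uses them; the Google-Extended and GPTBot tokens are real, documented names, but note that Google-Extended only governs use of your content for Gemini model training - the AI Overviews in search are built from the ordinary Googlebot index, so they can only be avoided by opting out of search indexing altogether:

```
# robots.txt - ask AI crawlers to stay out while leaving normal search alone
User-agent: Google-Extended   # Google's AI-training opt-out token
Disallow: /

User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: *                 # everyone else (including regular Googlebot)
Allow: /
```

Of course this is purely advisory: well-behaved crawlers honor it, but nothing forces an AI scraper to comply.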
 
I'll add this one to the pot.
How can we ever rely on automated responses?
Granted, you cannot be 100% sure with a human response either, but at least there's a presumption that the person has actually witnessed or experienced the situation themselves and is speaking with some authority, and you can always ask a follow-up question to confirm this. But what about an automated response derived statistically from arbitrary data?
Here's Google's Gemini chatbot's response to the question of how to go into record mode on a Tascam DP-24 machine.
[Attached image: GeminiTascam2.jpg]
It's easy to spot the similarities between the underlined incorrect parts and the myths and folklore which have been bandied about on social media over the years. It may be a trivial example, but misinformation will never be corrected unless someone actually spots it and puts it right.
 
In the US, the Communications Decency Act generally protects free speech over electronic communications, with the exception of communications in furtherance of criminal activity, or infringement of intellectual property. (This right derives from use of interstate communications infrastructure deemed to serve the public interest.)

Section 230(c)(1) of the Act says "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." So Google, Facebook, et al. cannot currently be held civilly liable for content they provide that has been sourced from third parties. That also protects from liability users of the computer service (those who post or share).

According to news reports and public statements by incoming US federal officials, there will be an attempt to change Section 230(c)(1) to hold content providers civilly liable for curating third party information placed on their servers. The theory seems to be that private parties regulated by the government through the Communications Decency Act violate the First Amendment by curating the content they offer (no matter how false, misleading, or crazy).

Should that come to pass, it will be interesting to see how it impacts AI content. Will Google some day be held liable for not curating AI information that causes financial, economic, or physical harm?

That also raises the question: what is a "publisher of information", and how does the provider of an interactive computer service differ from or conform to that definition?

The term "publisher" in typical use would likely call to mind a book publisher, a newspaper publisher, a publisher of sheet music, etc. However, Cornell Law School's Legal Information Institute provides this broad definition of the term "publish": "to give publicity to a work; to make a work available to the public in physical or electronic form; to circulate or distribute a work to the general public."

Here are some points to mull over when applying "publishing" to AI:
1. Private entities are not bound by the US First Amendment, which applies only to government, and in some cases, to private entities regulated by government or bound contractually to government.

2. A publishing business accepts unsolicited and solicited material, and a publisher chooses which material it will publish, taking into account the marketability, quality of writing, etc. of the subject being considered for publication.

3. A publishing business, before publishing, enters into an individual and specific contract with the author of the material, usually including a financial arrangement or the provision of some other "thing of value" (e.g., publication in a professional journal enhances the reputation of the author).

4. A publisher of professional journals will "peer review" all articles to assure the validity and accuracy of the material to be published.

5. A publisher assigns an editor to work with the author to develop the material for publication.

6. A publisher assumes responsibility for the content, printing (hard copy or electronically), marketing and distribution of the final product.

How all this plays out going forward remains to be seen. IMO, unvetted laissez-faire AI is creating problems already that can only be resolved through government legislative intervention in the public interest.

The conundrum is that to hold internet service providers accountable for AI content, they must be allowed to curate that content; but in curating content, they may be guilty of restricting free speech.
 
A very interesting post about AI concerning sites like this Tascam forum. I think Phil's post sums it up with his example. If it was not for you guys on this forum I think many people would not have had their questions expertly answered, myself included. 👍
 
I'm 100% on the same side of this fence as MJ, in particular as to the harm AI does to genuine human input/interaction, and the well-being of sites meant to accommodate/enable it.
And PhilT's example is a PERFECT example of why AI is years - maybe generations - away from being of actual benefit.
And it is my profoundly important :rolleyes: opinion that MarkR is EXACTLY right - people are using AI to generate info/"answers" (for profit) with NO RESPONSIBILITY for any problems/damage/harm it may cause, let alone the validity of the info given.

Which leads me around to my overall/widescreen opinion of AI (at least at this point in its "evolution" - or DEVOLUTION, depending on your perspective):
Like a great many technological advancements before it: it's a brilliant idea, and could be put to use accomplishing extraordinary things and be an immeasurable benefit to mankind.
But it WON'T: because the driving motivation in its development and implementation is not knowledge, nor education, nor breaking new ground or overcoming barriers/obstacles, nor benefit to mankind...

It's MONEY.
And (again, my profound opinion): it doesn't matter how destructive, worthless, illegal, immoral, exploitative, disgusting, distasteful, or just plain WRONG something is - if there's money in it, SOMEone's going to do it.
That's the world we live in.
"Better Living Through Technology" my AZZ. Now go watch "Terminator" and tell me I'm wrong.🤔
 
Just watched a video about Youtube copyright strikes with AI generated music, and how you could get yourself in a knot if someone feeds your music into an AI generator and then into the Content ID system.
It's a bit long, but there's a summary at about 18mins here.
All very worrying for composers.
 
Yes @Phil Tipping, and that is happening now. People are purposefully hijacking others' music on Youtube and other platforms by using AI. Governments are slow to react to things like this.
 
Section 230(c)(1) of the Act says "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." So Google, Facebook, et. al. cannot currently be held civilly liable for content they provide that has been sourced from third parties. That also protects from liability users of the computer service (those who post or share).
In 1994 I covered this issue with Shelly Steele from the EFF on my national talk show (at the time). As soon as a platform starts censoring its users' posts, it becomes a publisher instead of a platform. The principle is simple: if the platform keeps out of it and is ignorant of what's being posted, then it is not liable for the content it hosts. As soon as Youtube and others started censoring content for whatever reason, they made themselves legally liable for the content on the entire site. Except that the US Congress said otherwise.

The entire field needs an overhaul but there isn't a single person or even an agency that understands the big picture.
 
