JFC’s perspective on the dispute over the use of AI voice

Mirosław Majewski, President of JFC Polska

– I believe that technology is developing faster than current regulations. We need clear rules and market standards that, on the one hand, will protect creators’ rights and, on the other, will not block innovation or harm companies that use AI lawfully and in line with regulations, says Mirosław Majewski, President of JFC Polska.

The case concerning the use of a voice in a JFC advertisement has been widely commented on and described as precedent-setting. How do you – from the management’s perspective – see the essence of this dispute?

From the management’s perspective, the essence of the dispute comes down to the question of whether the voice of a specific person was used in advertising materials created with AI tools. We consistently emphasize that there has been no conscious or unlawful violation of anyone’s rights. The case is technical in nature and requires specialist knowledge – that is why the court admitted evidence from an expert in phonoscopy. This is not an “obvious” matter, as some publications present it, but one that requires reliable evidentiary analysis.

Questions have arisen in the public space about how the disputed material was created. Can you describe the process of preparing it today?

The advertising material was created in a standard marketing process, using AI tools available on the market. Our company did not use the plaintiff’s personal data or his image in the legal sense. In no campaign did we suggest that the material had been read by a well-known voice-over artist. The goal was to prepare a product message, not to identify or build associations with a specific person. I emphasize that all activities were carried out in the belief that they were lawful and consistent with applicable regulations. We intend to prove this in court and defend the company’s good name.

A thread has also emerged about voice-over recordings being publicly available and AI models learning from them. Is this true?

There are indeed recordings of well-known voice-over artists in the public space – in advertisements, in the media, on YouTube and Facebook. Under their terms of service, such platforms hold rights to the voices and to using them to train artificial intelligence; by posting voice samples there, the artists consent to this. Mikrofonika itself admitted that it had posted its voice-over artists’ recordings online.

At the same time, there is an ongoing industry discussion about how AI models are trained and what data is used. In this particular case, however, the key question is whether our materials involved the use of the plaintiff’s voice in violation of his rights. This requires a specialized phonoscopic assessment, and that is why the court appointed an expert.

Have you tried to resolve the matter amicably?

Yes, several times. In response to the lawsuit, we submitted a request to refer the parties to mediation and sent e-mails proposing settlement talks. The company stopped publishing the indicated materials immediately after receiving the first notice of the claims. We believe that matters of such technological and legal complexity should first be analyzed calmly and substantively, without media pressure. We are open to dialogue and amicable solutions, as long as they are based on facts and a reliable analysis of the evidence.

In our opinion, the whole matter serves only to promote Mikrofonika and its services. Mikrofonika did not want to talk to us, and at trial it appeared in three roles at once: it was indicated in the lawsuit as the voice-over artist’s representative, so it should act in the client’s best interest; it testified as a witness, a role that requires objectivity; and it acted as an entity pursuing its own interests. It is significant that all of this was done in full awareness that, under the law, Mikrofonika cannot act as a legal representative in this case, which the court confirmed at the first hearing. This does not change the fact that, as the representative indicated in the pleadings, Mikrofonika had full access to the case files and then testified as a witness, which in our opinion is unacceptable. In our view, the rate Mikrofonika quoted in its demands is several times higher than what is normally paid for an advertising voice-over. It was Mikrofonika who inflated the issue, and it was Mikrofonika who used pejorative terms that harmed not only us but also the voice-over artist himself.

During the first hearing, the court admitted evidence from an expert in the field of phonoscopy, finding that the case required specialist knowledge. Why?

Because assessing the similarity of voices, their identifiability, or their possible “recognition” by an average listener is not a matter of intuition but of specialist knowledge. A phonoscopy expert analyzes acoustic parameters, timbre, modulation and other technical features of a recording. Since the court decided that the case cannot be resolved without such an opinion, we are clearly not dealing with an obvious situation.

How do you see this issue – where does technology end and the entrepreneur’s responsibility begin?

Technology is a tool. The entrepreneur’s responsibility is to use it lawfully and in good faith. We acted in the belief that we were legally using solutions available on the market. At the same time, I believe that as AI develops, the need for clear standards grows – so that entrepreneurs have clear guidelines and creators have real protection of their rights.

Does the industry need more precise regulations regarding synthetic voices and deepfakes today?

Yes, I believe that technology is developing faster than current regulations. We need clear rules and market standards that, on the one hand, will protect creators’ rights and, on the other, will not block innovation or harm companies that use AI in accordance with the law. Cases like this show that the law must keep pace with technology to avoid interpretative uncertainty.

Has this situation prompted you to review your internal procedures for creating marketing materials?

Every controversial situation is an impulse to analyze and verify procedures. We have reviewed our processes for preparing marketing materials and implemented new internal regulations and a code of conduct to make those processes as transparent as possible and consistent with current legal standards. We are not a company that acts without reflection; this is an element of responsible management.

How does the company react to media publicity surrounding the trial? Have you felt its impact on your relationships with partners or customers?

We respond calmly and substantively. We have issued official statements emphasizing that the case is ongoing and that no ruling on liability has been made. We do not consent to the dissemination of false information, in particular terms suggesting “voice theft”. Unfortunately, some people in the media have already passed their own verdict in the case. Fortunately, our business partners know us as a company that has operated on the international market for almost 30 years, and they approach the matter with understanding, awaiting the court’s decision.

What lesson – as a manager and entrepreneur – do you take away from this situation?

First of all, the conviction that in the era of new technologies, even greater caution, transparency and dialogue are needed. AI opens up enormous opportunities, but at the same time it creates new legal and reputational challenges. As an entrepreneur, I take from this situation the lesson that, alongside innovation, it is equally important to build clear rules and legal frameworks.
