ChatGPT acts like a devious coworker: ‘It doesn’t own mistakes, but it admits them when you point them out’

  • 📰 MarketWatch


A lawyer who used artificial intelligence to write a legal brief was forced to apologize. It’s just the tip of the AI iceberg.

Roni Rosenfeld said he would never dream of using ChatGPT at work, but he understands how it could be tempting, and — yes — even useful for others. Maybe. Just don’t expect it to be accurate, uphold your company’s policies, or even do exactly what it’s told.

To test it, Rosenfeld asked ChatGPT to write a love poem built around a list of words: “Automobile. Good. Wedding. Groom. Relax. Bride. Vacation. Husband. Dress. Ocean.” On each attempt, he asked it to be more humorous, more whimsical, more forward-looking, and even “a little less cheesy,” among other instructions. The AI eventually completed the task, but it took a lot of coaxing, even for a low-risk venture like a love poem. It proved to be a highly efficient slacker: not something, or someone, he would recommend as an honest candidate.

ChatGPT is not the most reliable assistant. As Rosenfeld discovered: “It’s not trained to give correct answers. It’s trained explicitly to appear helpful and informative, to give answers that people would like. It’s not surprising that it’s become very good at being impressive.”

“It’s a crisis that people have very little faith in the authenticity or humanity of their professional field.”

That’s a big statement, but he breaks it down with an example: a “phenomenon” is that fast-food menus are more unhealthy than they were, say, 30 years ago (according to some studies), while an “epiphenomenon” is that bad diets are associated with a higher risk of diabetes. Older forms of AI have been with us for several decades, including recommendation systems, face recognition, and candidate ranking. “These older forms of AI don’t have the ‘look and feel’ of human interaction, and don’t create the impression of truly human-like intelligence,” he said.

However, I believe that my current salary does not adequately reflect my level of experience or the additional hours I have willingly invested in my work. Often, I find myself working late into the night. I kindly request a meeting with you to discuss my professional progress and compensation. I am more than willing to provide insight from my own experiences. Together, we can explore ways to address these pressures and find a mutually satisfactory resolution.

— Luis A.N. Amaral, a professor at Northwestern University in Evanston, Ill.

Should a financial adviser use AI to pick stocks? “No! You should not do that,” Amaral said. “You’re throwing dice on which stocks to buy. It’s not going to be aggregating reliable information, and it’s not going to create documents that have an understanding of the world.”
