Screenwriting: AI feedback by Robert Bridge

Robert Bridge

AI feedback

Has anyone tried AI script feedback, and was it worth it? Better or worse than human analysis?

Miquiel Banks

They complement each other.

Göran Johansson

The only good feedback is to be told what to change, and preferably how. AI does this poorly, so I prefer human comments. AI feedback on my latest script has not helped me, but human comments have.

CJ Walley

Depends on what kind of analysis you're making.

It's amazing at breaking down the objective elements, because that's just processing data. You're asking a robot to do a robotic task.

The thing is, most of a script's value is held within subjective elements, which AI can't reflect on because it has no emotion or opinion, and it tends to be programmed to blow hot air up your ass.

It's better to just stop believing in an objective good and bad.

Where it gets murky is with financial stuff. People are desperately trying to find what statistically makes the most profit and build systems that predict the money a script will make.

Elle Bolan

It's okay for the initial draft phases if you're new to writing. It can point out structural issues before you get too deep into the draft. The only thing I've found to be very useful for me is actually on ScriptReader AI, and it's called the Scriptoscope. It breaks your script down by act, scene, etc., with categories for different parts of your script: overall, conflict, and so on. It's not foolproof, because AI misses subtext, but it gives you places to double-check. Example image attached.

There is another scene-breakdown section that could be useful in early drafts. That chart is huge, so I can't provide an image, but it's similar to this one, just a linear chart instead of the donut. It scores each scene across several categories and rates each between 1 and 10.

Both graphs use the same color code: green is good, yellow needs a look, red is a warning. That being said, not every scene should be green across all categories; the overall score matters the most.

At the end of the day, it's just like any other feedback. It's for you to consider, not base your whole revisions on. You have to keep in mind that human eyes are best. But for your rough drafts? Sure. It can be somewhat useful as long as you can sift through algorithm noise.

Jim Boston

Robert, I've been using ScriptReader.ai and ScreenplayIQ, a tool from WriterDuet.

I got into both tools out of curiosity...just to see if what I've written stands up alongside other people's scripts (especially those that actually became movies and TV shows).

They've helped me strengthen written pitches in that ScreenplayIQ and ScriptReader help me identify mood, themes, and character arcs. The two tools help me see things I don't always think about when I'm actually writing a script.

Still...the two tools don't have the final word at all.

Feedback from Stage 32/Script Revolution members is where it's at for me...the biggest factor. If a script I've come up with resonates with people reading it, I'm happy.

Pierre Lapointe

My experience with coverage has been spotty. The same script is critiqued differently by different individuals. That's the problem with human feedback: every reader has his or her biases. Some may love a story, the characters, the dialogue, etc., while another may feel the story is not well developed, the characters clichéd, and so on. So coverage in and of itself can be useful, but it is inherently flawed.

With that in mind, I decided to give AI a shot, using both a service (Callaia.com) and Google Gemini directly with my own prompts. Callaia charges around $79, while Gemini is basically free.

The nice thing about AI is that the response is immediate (within seconds to minutes). No need to wait for weeks to get a critique while you toy with the idea of making revisions.

I submitted a pilot episode of a serialized show to Callaia, and the amount of feedback was impressive, though often repetitive. It was obvious to me that it processed a 60-page series pilot and a 120-page feature using the same methodology, which IMO is problematic. The feedback was interesting, covered more ground than standard coverage, and pointed to some areas worth a look, but at the same time, it clearly could not wrap its head around the idea that a pilot is the starting point of an extended storyline.

It made some assumptions that were completely incorrect: claiming characters said or did things that were not in the script, or believing characters were leads even though they appeared in only a small part of a significant scene. I've noticed that it assumes the first character introduced is the lead, regardless of how that character figures in the rest of the story. These errors are typical of AI and are appropriately called hallucinations; taken at face value, they can be extremely misleading.

I also devised my own lengthy prompts and submitted a pilot script to Google Gemini. I felt Gemini was much better than Callaia, perhaps because I was specific in my prompt instructions about it being a pilot episode and about the type of feedback I was looking for. You can also ask Gemini to write a synopsis of the script and then work with it to refine it. It's definitely powerful stuff and extremely helpful. It's also always very optimistic, which you really have to push through to get to useful results.

All in all, many assumptions are made about the characters, their backstories and their motivations. It has a problem picking up on nuance and subtext. Still, it gives some valuable insights and often hits on accurate points about character, structure, dialog (though it often finds dialog expositional when it isn't). It definitely has its quirks, but provides meaningful observations.

I suggest you use it as a starting point to refine your script, and then, when you're satisfied, spend the money on "human" coverage.

Ilanna Mandel

I prefer human critiques. There is nothing that replaces gut instinct, experience, and the understanding of human relations.
