Hot100.ai


@neilswmurray Flambo runs on gpt-4o-mini with a low temperature (0.25), so the scoring stays consistent.

Every project gets evaluated with the same structured prompt.

It looks at a few things:

  • what the project does and how it’s described

  • the problem it’s solving

  • the tools used to build it

  • and whether there’s a live product to try

It doesn’t dig through repos or do anything heavy like that. It’s judging based on what’s submitted and how well the story and the end result line up. The scoring has two parts:

  • Innovation — is this bringing something new or interesting?

  • Utility — is it actually useful, clear in purpose, and understandable?

Both are scored on a 1.0–10.0 scale.
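
For a rough idea of what a structured scoring call like this can look like, here’s a minimal sketch using the OpenAI Python SDK with JSON output. The prompt wording, field names, and submission fields are illustrative assumptions, not the actual Flambo prompt:

```python
# Sketch of a structured scoring call. Prompt text, field names, and
# submission fields are assumptions, not the real Flambo prompt.
import json
from openai import OpenAI

client = OpenAI()

def score_submission(submission: dict) -> dict:
    """Ask gpt-4o-mini for Innovation and Utility scores on a 1.0-10.0 scale."""
    prompt = (
        "Score this project on two axes, each from 1.0 to 10.0:\n"
        "- innovation: is it bringing something new or interesting?\n"
        "- utility: is it actually useful, clear in purpose, and understandable?\n"
        'Reply as JSON: {"innovation": <float>, "utility": <float>}.\n\n'
        f"Description: {submission['description']}\n"
        f"Problem: {submission['problem']}\n"
        f"Tools: {submission['tools']}\n"
        f"Live product: {submission['live_url'] or 'none'}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.25,  # low temperature keeps scores consistent
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)
```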

There are also some light adjustments. For example:

  • small bonus if it’s live and easy to try

  • small bonus if the project has been security checked by the builder

  • small penalty if it’s just a waitlist or extremely vague

Final score is simply the average of Innovation and Utility, rounded to one decimal place.
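
In plain terms, the math is just an average plus a couple of light adjustments. Something like the sketch below; the exact bonus/penalty sizes and where they apply aren’t stated, so those numbers are placeholders:

```python
def final_score(innovation: float, utility: float,
                is_live: bool, security_checked: bool,
                waitlist_only: bool) -> float:
    """Average Innovation and Utility, then apply light adjustments.
    The 0.25 / -0.5 adjustment sizes are placeholder assumptions."""
    score = (innovation + utility) / 2
    if is_live:
        score += 0.25   # small bonus: live and easy to try
    if security_checked:
        score += 0.25   # small bonus: builder ran a security check
    if waitlist_only:
        score -= 0.5    # small penalty: waitlist-only or extremely vague
    return round(min(max(score, 1.0), 10.0), 1)  # clamp to scale, one decimal
```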

For the chart, Flambo’s score is the main signal. Human votes are there too, but they act more like momentum than the deciding factor. That said, they can and will swing scores that are tied. I’ve tweaked the scoring model a few times during the beta, and I expect to keep doing that when appropriate.
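
For the tie-breaking part, a hedged sketch of what the ranking could look like, assuming votes only matter when Flambo scores are equal (this is my reading of the behavior, not the production ranking code):

```python
def rank_projects(projects: list[dict]) -> list[dict]:
    """Sort by Flambo score first; human votes break ties."""
    return sorted(projects, key=lambda p: (p["flambo_score"], p["votes"]), reverse=True)
```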

And for now, I’m still reviewing every project myself. Just keeping an eye on quality and making sure the whole thing feels right as it grows.

Appreciate the question!


