4 Posts · 249 Comments · Joined 1 year ago · Cake day: July 1st, 2023

  • you often need to buy it from other countries. For instance, Russia. Not great.

    Yeeeeah, I wouldn’t worry about that. Sure, we (Australia) are conservative with our fears of mining and exporting uranium, especially given the Cold War and reactor whoopsies around the world. But historically it doesn’t take much for us to go down on an ally.

    Just let us finish offloading all our coal to the worst-polluting nations first, then we’ll crack the top-shelf stuff.

  • saltesc@lemmy.world (OP) · to Memes@lemmy.ml · Easier said than done · 8 days ago

    Oh, I’ve never looked into it, I just notice it sometimes. I don’t say anything harmful or nasty, just unpopular, so I expect downvote burial even before I hit the post button haha. I figure that’s how it’s always meant to work: downvotes handle dipshit remarks, mods handle malicious ones. But it seems entire conversations with multiple people get removed because, despite all the upvotes and people having a good ol’ fashioned discussion, a mod has a different personal opinion and it all goes. Even the off-hand comments connected to that thread.

  • saltesc@lemmy.world · to Memes@lemmy.ml · Math · edited · 13 days ago

    That’s a tough question in analytics lol

    You mean mathematical examples? Or examples of analytical outcomes? Keep in mind that the more analytics-heavy it gets, the more sources, patterns, variables, and scenarios are involved, but I could provide a single example.

    Edit: Oh, wait. If you’re referring to just averages… In forecasting I prefer, at a minimum, to do weighted averaging. That’s where I take a set period of accumulated historical data to provide a stable base, but apply more weight the more recent (i.e. more relevant) the data is. It gives a more realistic average than a single snapshot of data that could be an outlier.
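
    A rough sketch of what I mean, with made-up numbers (linear recency weights here; the window length and weighting curve would depend on the data):

    ```python
    import numpy as np

    # Made-up daily demand history, oldest first.
    history = np.array([22.0, 18.0, 25.0, 21.0, 19.0, 24.0])

    # Linearly increasing weights so the most recent periods count most.
    # (Exponential decay is another common choice.)
    weights = np.arange(1, len(history) + 1)

    print(f"plain average:    {history.mean():.1f}")                        # 21.5
    print(f"weighted average: {np.average(history, weights=weights):.1f}")  # ~21.7
    ```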

    But speaking of outliers, I also prefer to down-weight outlying data points that would skew the output, especially when the sample size is low. Like 1, 2, 2, 76, 3, 2. That 76 obviously skews the “average”.
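
    For instance, on that toy sample (the MAD-based cutoff and the token weight of 0.05 are arbitrary choices for illustration, not a standard):

    ```python
    import numpy as np

    data = np.array([1, 2, 2, 76, 3, 2], dtype=float)

    # Flag points far from the median using the median absolute deviation (MAD),
    # then give them a token weight instead of dropping them outright.
    median = np.median(data)
    mad = np.median(np.abs(data - median))
    robust_z = 0.6745 * (data - median) / mad  # 0.6745 scales MAD to ~1 std dev

    weights = np.where(np.abs(robust_z) > 3.0, 0.05, 1.0)

    print(f"plain mean:    {data.mean():.2f}")                        # 14.33, dragged up by the 76
    print(f"weighted mean: {np.average(data, weights=weights):.2f}")  # 2.73
    ```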

    Above that, depending on what’s required, I’ll use a proper method. Say someone wants to know how many trucks they need per day on average: I’ll utilise a Poisson distribution instead, to get the number of trucks needed each day to meet service requirements, including acceptable queuing, across the day. It’s like how the popular Erlang formulas utilise the Poisson distribution and can handle maybe 90% of BAU supply-and-demand loading in day-to-day operations with a couple of clicks.
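
    A toy version of the truck example (numbers assumed; a real Erlang-style model would also account for service times and queuing, which this skips):

    ```python
    from scipy.stats import poisson

    avg_loads_per_day = 20   # assumed (weighted) average daily demand
    service_level = 0.95     # cover the full day's demand on 95% of days

    # Smallest fleet size n with P(demand <= n) >= service_level,
    # pretending one truck handles exactly one load per day.
    trucks_needed = int(poisson.ppf(service_level, mu=avg_loads_per_day))

    print(f"mean demand suggests ~{avg_loads_per_day} trucks; "
          f"{service_level:.0%} coverage needs {trucks_needed}")
    ```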

    That’s a basic example, but as data cleanliness improves, those better steps can be taken. It could be like 25 (simple average from last Wednesday) vs. 20 (weighted average over the last month) vs. 16 (actually needed if optimised correctly).

    Oh, and if there’s data on each truck’s mileage, capacity, availability, traffic density across areas over the day, etc., obviously it can be optimised even further. Though I’d only go that far if things were consistent/routine. Script it, automate it, set and forget, and have the day’s forecast appear in the warehouse each morning.

    And yet such simple things are often incredibly hard to get done because of poor data governance or systems.