K2-18b: did JWST really find evidence of life on this exoplanet?
  • maegul maegul 1w ago 100%

    First, no need to apologise.

    Second, no I don’t think you summarised the video, IIRC, it mostly gets into the theory of the techniques used and what can be done to do a better job.

    2
  • Nearly 50% of researchers quit science within a decade, huge study reveals
  • maegul maegul 1w ago 100%

    Possibly, but when scientific knowledge and problems were smaller, one person could actually make a mark alone IMO. And if they happened upon a new discovery or insight then they’d appear to be geniuses, all alone.

    At some point, when the work to make a discovery requires more than one person and the amount of theory involved in understanding its significance is too much for one person to be authoritative on all of it, then it’s a team sport.

    1
  • Nearly 50% of researchers quit science within a decade, huge study reveals
  • maegul maegul 2w ago 100%

    Yep. There’s a whole world of people happy to work very hard on research for the rest of their lives … and instead we have them writing emails and wrangling spreadsheets for … ??

    Sometimes “shitty” work needs to be done, obviously … but I think it’s far less obvious that the pool of things that need to be done lies entirely in the random inefficient shit the business world just accepts. Instead, that’s just where the money flows.

    3
  • Nearly 50% of researchers quit science within a decade, huge study reveals
  • maegul maegul 2w ago 100%

    Absolutely. It’s a shit show.

    And interestingly, making the general public more aware of this is likely quite important. Because 1, they have very idealistic views of what research is like, and 2, just about everyone is entering research blind to the realities. It’s a situation that needs some sunlight and rethinking.

    IMO, a root cause is that the heroic genius researcher ideal at the base of the system’s design basically doesn’t exist any more. Things are just too big and complex now for a single person to be that important. Dismantle that ideal and redesign from scratch.

    17
  • How good is vegan chocolate?
  • maegul maegul 2w ago 100%

    I’ve certainly had some nice vegan chocolate (that isn’t dark, that is); IIRC, Whittaker’s Oat Milk choc hit the spot. But from my tastings, a good vegan “milk” chocolate is hard to come by.

    For me, I’d gone for dark choc prior to going vegan, so I just stick with that. If it’s dark enough, it’s almost certainly going to be vegan.

    8
  • Beyond enshittification, why does tech oftentimes suck?
  • maegul maegul 3w ago 100%

    There was an article by Google about the security of their code base, and one of their core findings was that old code is good: it gets refined and becomes freer of bugs over time. And conversely, of course, new code is worse.

    https://security.googleblog.com/2024/09/eliminating-memory-safety-vulnerabilities-Android.html

    Generally it seems like capitalism’s obsession with growth is at odds with complex software. Its basis in property also.

    6
  • afl
    AFL 3w ago
    GF predictions anyone?
  • maegul maegul 3w ago 100%

    Milk that's very warm, almost already gone bad, but with added yeast.

    3
  • [CW: animal cruelty] Dogs stacked in shipping containers overnight without water — what vet report reveals about now-shutdown breeder
  • maegul maegul 3w ago 100%

    Yea … I tried to touch on this in the “vegan cat food debate” that happened on lemmy-world in response to all the “animal and pet cruelty” vitriol.

    Apart from ignoring the nuance of the conversation vegans were having, the presumption that the pet industry and all those who have pets are animal lovers, and that vegans were somehow in contravention of an otherwise pristine endeavour … was problematically naive and ignorant of how much casual pet owners can in fact buttress an industry and practice with plenty of cruelty baked in.

    11
  • Happy 12 million!
  • maegul maegul 4w ago 100%

    It’s not too hard. There are a bunch of different platforms one might experiment with, as well as instances. Some will use multiple accounts for different needs or interests. On lemmy, multi accounts are useful for having different feeds, for example. I probably have 7-10. I’ve probably forgotten about a few of them. If you’re curious, it happens.

    5
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearLE
    Jump
    Is there a setting for instances to disable voting from Local or All feeds?
  • maegul maegul 4w ago 100%

    Yea. Even nicer if it could be adjusted on a post-by-post basis (however viable that is).

    1
  • Holy Hell, The Social Web Did Not Begin In 2008
  • maegul maegul 4w ago 87%

    By the same token, Evan seems a bit self-centred and egotistical about his projects. If you look at his comments about BlueSky, it seems he’s pretty bitter that someone dared to make an alternative protocol that so far has a decent number of users, when acceptance of multiple systems experimenting and borrowing from each other for the good of the open web is right there as a natural position.

    12
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearLE
    Jump
    Is there a setting for instances to disable voting from Local or All feeds?
  • maegul maegul 4w ago 100%

    It’s definitely an interesting and relevant idea I think! A major flaw here is the lack of ability for communities to establish themselves as discrete spaces separate from the doomscrolling crowd.

    A problem with the fediverse as a whole IMO, as community building is what it should be focusing on.

    Generally decentralisation makes things like this difficult, AFAIU. Lemmy has things like private and local only communities in the works that will get you there. But then discovery becomes a problem which probably requires some additional features too.

    3
  • afl
    AFL 1mo ago
    It’s like the 2000s again! Not a single Melb team in the prelims!
  • maegul maegul 4w ago 100%

    Ha yep … when was the last time that happened?

    Don’t tell me it was 06 eagles v swans?!

    Double check … yep. First time in 18 years.

    1
  • en.wikipedia.org

    I’d almost forgotten about this album, rediscovered it today, and fuck I love the vibe and energy.

    4
    0
    What is your favourite open source software that you discovered in the past year, that you can no longer live without?
  • maegul maegul 1mo ago 100%

    it's the sort of tool that is really just fundamental now and should be ubiquitous and promoted and taught and talked about everywhere there is knowledge work. Even more so as there's a great open source version of the tool.

    13
  • Trams are expensive here
  • maegul maegul 1mo ago 100%

    Oh yea, I hear you.

    What your point does though is open up the discussion about whether enforcement makes financial sense in isolation. And once you open that door, the whole thing becomes uncomfortable for a lot of people who are stuck in a simple black-and-white justice mentality, where "do what you're supposed to, pay what they charge, or be punished" is all there is to making the world work well. You know, "law and order" types.

    You're trying to talk about incentives. For many though that's a very dangerous slippery slope. So I'm trying to get ahead of that and wonder if the end of that slippery slope is actually a demonstrably good thing.

    2
  • afl
    AFL 1mo ago
    It’s like the 2000s again! Not a single Melb team in the prelims!
  • maegul maegul 1mo ago 100%

    Ha ... somehow I doubt that.

    2
  • afl
    AFL 1mo ago
    It’s like the 2000s again! Not a single Melb team in the prelims!
  • maegul maegul 1mo ago 100%

    No idea ... I've tuned out of this season a lot for various reasons ... so I've got no clue!

    2
  • Trams are expensive here
  • maegul maegul 1mo ago 100%

    I remember hearing rumours during the rollout that tech employees were found asking for help on forums in ways that weren’t promising for the health and talent of the people building it.

    But yea, it’s the embarrassment of this sort of stuff that must be masking the real financials of PT and how viable a free system would be.

    6
  • Instead we’ve got the lions, power, swans and cats, who were premiers in 2003, 2004, 2005 and 2007 respectively (with a cheeky eagles flag in 2006).

    edit: Or, to include repeats and losing the grand final: the 01, 02, 03, 04, 05, 07 & 09 premiers, and the 04, 06, 07 & 08 runners up are in the prelims this year. Not one GF in 9 years that didn’t have one of these four, and only two that were won by another team.

    11
    9
    Trams are expensive here
  • maegul maegul 1mo ago 100%

    Yea I’ve kept track of how often I’ve encountered inspectors, and most of the time it’d be worth it to not get the ticket or not tap on. Sometimes though I’ve noticed an increase in the number of inspectors that would definitely shift the equation. Also train stations with gates complicate the matter.

    I don’t know if it’s out there, but I’d personally like to know how the finances come out for making PT free. You obviously lose revenue, but also all the overhead of paying for inspectors and for all of the ticketing infrastructure. I also wonder if the part that makes the finances work is all the fines collected, which would be pretty fucking shithouse if true.
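
    To make the break-even intuition concrete, here’s a toy expected-value sketch in rust (every number below is made up, purely for illustration):

    ```rust
    // Toy comparison: always paying the fare vs never tapping on.
    // All figures are hypothetical, just to show the arithmetic.
    fn main() {
        let fare: f64 = 5.50;            // cost per trip (hypothetical)
        let fine: f64 = 300.0;           // penalty if caught (hypothetical)
        let inspection_rate: f64 = 0.01; // fraction of trips inspected (hypothetical)

        // Average cost per trip if you never pay:
        let expected_cost_evading = inspection_rate * fine;
        println!("paying: ${fare:.2}/trip, evading: ${expected_cost_evading:.2}/trip on average");

        // Evading stops paying off once inspection_rate * fine >= fare:
        let break_even_rate = fare / fine;
        println!("evasion breaks even at an inspection rate of {:.1}%", break_even_rate * 100.0);
    }
    ```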

    3
  • Is Bluesky (going to be) as community-run as Mastodon?
  • maegul maegul 1mo ago 100%

    The catch is that the whole system is effectively centralised on BlueSky’s backend services (basically the relay). So while the protocol may be standardised and open, and implemented with decentralised components, they’ll control the core service. Which means they can unilaterally decide to introduce profitable things like ads and charging for features.

    The promise of the system though is that it provides for various levels of independence that can all connect to each other, so people with different needs and capabilities can all find their spot in the ecosystem. Whether that happens is a big question. Generally I’d say I’m optimistic about the ideas and architecture, but unsure about whether the community around it will get it to what I think it should be.

    15
  • How are people feeling about it? I was disappointed by season 1, but happy to keep watching as I'm a die hard fan from childhood. Season 2 had me excited *at first* ...

    ::: spoiler spoilers (and ranting)
    The first two-three episodes at least had me even a little pumped. The dark wizard in the east very much signals to me that the stranger could be a blue wizard, along with the dark wizard, which is honestly very cool and a nice way to split the difference around Tolkien's "speculation" on what happened to them. Getting more complex Sauron manipulation and moving the plot along too seemed nice.

    But after episode 4, I don't know. I came away from it thinking it might have been the worst tv episode I've watched since Picard S2, which was very strange given how much interesting shit they did. Ents, Bombadil, Wizards, Hobbit origins (actually I don't care for the amount of hobbit stuff in the show at all). But there was something just boring about it all for me.

    The only way I can explain what I think I'm seeing, *and why it's fundamentally flawed*, is that the writers/directors want to take Tolkien seriously and even feel rather pressured to do so ... and so in many ways they're actually writing/filming that sense of seriousness rather than a well thought out adaptation style.

    The clue for me is how the whole show is at once strangely grounded *and* somehow "elevated" at the same time. The elves, such as Galadriel and Elrond, are kinda normal people doing normal things a lot of the time (compare LoTR trilogy Galadriel basically being mind-crushing and haunting most of the time) ... but talk as though they're reading directly from the bible or Silmarillion. Same for Halbrand/Annatar/Sauron. The construction of the rings is a clue into this I think, where they've attempted to portray it as powerful and important, but there's absolutely no sense of how in the world they're magical, no indication that there's some special elven craft behind them. Just "add mithril and get powerful rings".

    Bombadil's dialogue seemed the same to me. Talking about being the eldest as though he's talking about what happened last week. Now in that character this sort of approach makes the most sense. But even so, there didn't seem to be any joy, jolly or aloofness about the character to signal how old he must be to be casual about witnessing the beginning of time. And there's always the concern the show should have for making us the viewer *feel* what's happening on screen ... and I don't think we felt Bombadil's mysteriousness much at all. Compare with, in the LoTR books, Tolkien using a wonderful way of showing that: the one ring had no effect on him whatsoever, to the point that he could see Frodo while he was wearing it.

    The only breath of fresh air so far has been the dark wizard, which clearly takes cues from Saruman. It's probably been the only sense stylistically I've gotten that we're in a lost age of a fantasy world.

    One take I had from season 1 was that RoP's biggest problem might be that it's being made after Game of Thrones, not before it. That GoT is absolutely the wrong influence for a show like this, and yet a show like this was always likely to have it as one due to its pervasive success. And I feel like I may have been right about that. The Tolkien world and GoT "politics and intrigue" are not compatible. Moreover, I suspect the GoT style may have run its course somewhat. A show like RoP was a chance to try something interestingly mystical and I don't think the creators were up to the challenge, perhaps not at all.
    :::

    12
    8
    www.newsweek.com

    While territorial claims are and will likely remain heated, what struck me is that the area is right near the Drake Passage, in the [Weddell Sea](https://en.wikipedia.org/wiki/Weddell_Sea) (which is fundamental to the world's ocean currents AFAIU). I don't know how oil drilling in the Antarctic could affect the passage, but still, I'm not sure I would trust human oil hunger with a 10ft pole on that one.

    Also interestingly, the discovery was made by Russia, which is a somewhat ominous clue about where the current "multi-polar" world and climate change are heading. Antarctica, being an actual continent that thrived with life up until only about 10-30 M yrs ago, is almost certainly full of resources.

    287
    67
    https://www.youtube.com/watch?v=mn2Dcy-NDTw

    It's funny, at the time of posting, many of the YT comments are *very* nostalgic about how much has happened in this 8 year period ... and I can't lie, I feel it too god damn it.

    67
    5

    Seems like fertile ground for coming up with something fun and interesting ... a whole shadow universe that barely touches ours ... but I don't think I've ever seen it.

    26
    24

    Rant …

    ::: spoiler spoiler
    I’m talking about Ash/Rook, obviously.

    Just saw the film recently, and while it’s a bit of a love it or hate it film I think, the Rook character is I think objectively egregious. The idea is good, IMO, in a number of ways, and I can understand that the film makers felt like it was all done with love and affection for Holm and the character. As a viewer, not necessarily onboard with how many cues the film was taking from the franchise, I noticed the silhouette of Rook pretty quickly and was quite happy/hyped to see where it would go.

    **But OMG the execution is unforgivable!** And I feel like this is just so much of what’s wrong with Hollywood and VFX, and also indicates that some execs were definitely intervening in this film. Somewhat fortunately for the film, it had a low budget (AFAICT, by Wikipedia) and is making a profit. But it’s no excuse to slap some bad CGI onto shots that were not designed for bad CGI. Close ups on the uncanny valley! Come on!

    AFAICT, bad CGI is often the result of a complete disconnect between the director and the VFX crew, in part because the VFX industry is kept at arm’s length from the film industry, despite (or because of) its massive importance. That CGI is not something you do a close up on. No remotely decent director would have done that knowing the CGI looked like that. This is likely bad studio management creating an unworkable situation.

    What could have worked much better IMO is don’t have the synth functioning well. Have its facial expressions and movements completely artificial and mechanical. Rely on the likeness of Holm and the AI voice (which did and generally do work well). Could have been done just with a well directed animatronic coupled with some basic CGI to enrich some textures and details. Instead we got a dumb “we’ll do it in post” and tortured some poor editor into cutting those shots together.

    For many the film was a mixed bag. For me too. But this somehow prevents me from embracing it because I just don’t trust the people who made it.
    :::

    … End rant.

    24
    16
    https://www.youtube.com/watch?v=IToAClt_utU

    A nice and fair comparison, I thought. The main difference, it seems, was the style of the two films: a bunch of stylistic choices quite separate from whether CGI was used distinguish the two. My take after seeing Furiosa was that its biggest flaw was that its makers struggled with the expectations of Fury Road, and I think these stylistic differences kinda support that: I'd guess they felt like they had to go with a different look and not simply repeat Fury Road's aesthetic, when in the end there may not have been much of a coherent artistic purpose behind those changes.

    9
    0
    https://www.youtube.com/watch?v=jQs7jb25WNY

    New genre just dropped! I've liked some of the other things this guy has done, but didn't get into this track at first. As I kept watching though, I got more and more into it and am certain I'd be down for an album of this stuff.

    14
    4

    *Yes, I'm slow, sorry!*

    Now this may very well be excessive expectations. I had heard a few people say it's this year's Andor. IE, you should just watch it even if it's not the sort of thing you think you'd be into. Also, I've never played the games.

    **I've just finished the first 2 episodes**, and, for me, it's not bad, it's a kinda interesting world ... but there's a distinctly empty feeling and awkwardness to the show for me. Sometimes scenes feel like they're either filling time or still trying to find their rhythm. I'm not sure any of the dialogue has caught my ear (at all). I'm not sure I've picked up on any interesting stakes or mysteries. And I've often wondered about the directing (where I can't help but wonder if Jonathan Nolan's directing is more about trying to compete with his brother).

    The soft tipping point for me was the Knight's fight with the Ghoul (episode 2) ... it just felt pointless and childish. The whole scene seemed to strangely lack any gravity or impetus. And I find myself ~2.5 hrs in and not caring about anything that's happening.

    It's a post nuclear apocalypse world, with some mutants, a naive bunker person, and a manipulative corporation or two doing sneaky shit ...

    ... dunno ... what am I missing? Should I just keep watching?

    33
    31
    https://www.youtube.com/watch?v=mpHlhNpc_ko

    Watching this, and seeing more of these types of interviews from Corridor Crew, it struck me that it's filling the void left by the death of DVDs/BluRays and their special features.

    11
    0

    # Intro

    Having read through the macros section of "The Book" (Chapter 19.6), I thought I would try to hack together a simple idea using macros as a way to get a proper feel for them.

    The chapter was a little light, and declarative macros (using `macro_rules!`), *which is what I'll be using below*, seemed like a potentially very nice feature of the language ... the sort of thing that really makes the language malleable. *Indeed, in poking around I've realised, perhaps naively, that macros are a pretty common tool for rust devs (or at least more common than I knew).*

    I'll rant for a bit first, which those new to rust macros may find interesting or informative (it's kinda a little tutorial) ... *to see the implementation, go to the "Implementation (without using a macro)" heading and what follows below*.

    # Using a macro

    Well, "declarative macros" (with `macro_rules!`) were pretty useful I found and easy to get going with (such that it makes perfect sense that they're used more frequently than I thought).

    * It's basically pattern matching on arbitrary code and then emitting new code through a templating-like mechanism (pretty intuitive).
    * The type system and `rust-analyzer` `LSP` understand what you're emitting perfectly well in my experience. It really felt properly native to rust.

    ## The Elements of writing patterns with "Declarative macros"

    **Use `macro_rules!` to declare a new macro**

    Yep, it's also a macro!

    **Create a structure just like a `match expression`**

    * Except the pattern will match on the code provided to the new macro
    * ... And uses special syntax for matching on generic parts or fragments of the code
    * ... And it returns new code (not an expression or value).

    **Write a pattern as just rust code with "generic code fragment" elements**

    * You write the code you're going to match on, *but for the parts that you want to capture, as they will vary from call to call*, you specify variables (or more technically, "metavariables").
        * You can think of these as the "arguments" of the macro, as they're the parts that are operated on while the rest is literally just static text/code.
    * These variables *will have a name and a type*.
    * The name is prefixed with a dollar sign `$` like so: `$GENERIC_CODE`.
    * And its type follows a colon as in ordinary rust: `$GENERIC_CODE:expr`
    * These types are actually "fragment specifiers": they specify what part of rust syntax will appear in the fragment.
        * *Presumably,* they link right back into the rust parser and are part of how these macros integrate pretty seamlessly with the type system and borrow checker or compiler.
    * Here's a decent list from rust-by-example (you can get a full list in the [rust reference on macro "metavariables"](https://doc.rust-lang.org/reference/macros-by-example.html#metavariables)):
        * `block`
        * `expr` is used for expressions
        * `ident` is used for variable/function names
        * `item`
        * `literal` is used for literal constants
        * `pat` (pattern)
        * `path`
        * `stmt` (statement)
        * `tt` (token tree)
        * `ty` (type)
        * `vis` (visibility qualifier)

    *So a basic pattern* that matches on any `struct` while capturing the `struct`'s name, its only field's name, and its type would be:

    ```rust
    macro_rules! my_new_macro {
        (
            struct $name:ident {
                $field:ident: $field_type:ty
            }
        )
    }
    ```

    Now, `$name`, `$field` and `$field_type` will be captured for any single-field `struct` (and, *presumably*, the validity of the syntax enforced by the fragment specifiers).
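
    Strictly speaking, the pattern-only snippet above won't compile on its own, since every `macro_rules!` rule needs an arm. Just as a sanity check, here's a minimal compilable sketch, where the emitted code (a `describe` function of my own invention) is nothing but a placeholder:

    ```rust
    // Minimal sketch: the pattern above plus a trivial arm so it compiles.
    // The emitted `describe` function is just a placeholder for illustration.
    macro_rules! my_new_macro {
        (
            struct $name:ident {
                $field:ident: $field_type:ty
            }
        ) => {
            fn describe() -> &'static str {
                // stringify! turns the captured identifier back into text
                concat!("struct ", stringify!($name))
            }
        };
    }

    my_new_macro! {
        struct Point {
            x: f64
        }
    }

    fn main() {
        println!("{}", describe()); // prints: struct Point
    }
    ```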
    **Capture any repeated patterns with `+` or `*`**

    * Yea, just like `regex`
    * Wrap the repeated pattern in `$( ... )`
    * Place whatever separating code that will occur between the repeats *after* the wrapping parentheses:
        * EG, a separating comma: `$( ... ),`
    * Place the repetition counter/operator after the separator: `$( ... ),+`

    ### Example

    *So, to capture multiple fields in a `struct`* (expanding from the example above):

    ```rust
    macro_rules! my_new_macro {
        (
            struct $name:ident {
                $field:ident: $field_type:ty,
                $( $ff:ident : $ff_type:ty ),*
            }
        )
    }
    ```

    * This will capture the first field and then any additional fields.
    * The way you use these repeats mirrors the way they're captured: they all get used in the same way and rust will simply repeat the new code for each repeated capture.

    ## Writing the emitted or new code

    **Use `=>` as with match expressions**

    * Actually, it's `=> { ... }`, IE with braces (not sure why)

    **Write the new emitted code**

    * All the new code is simply written between the braces
    * Captured "variables" or "metavariables" can be used just as they were captured: `$GENERIC_CODE`.
        * *Except types aren't needed here*
    * Captured repeats are expressed within wrapped parentheses just as they were captured: `$( ... ),*`, including the separator (which can be different from the one used in the capture).
        * The code inside the parentheses can differ from that captured (*that's the point after all*), but at least one of the variables from the captured fragment has to appear in the emitted fragment so that rust knows which set of repeats to use.
        * A useful feature here is that the repeats can be used multiple times, in different ways in different parts of the emitted code (the example at the end will demonstrate this).

    ### Example

    *For example*, we could convert the `struct` to an `enum` where each field became a variant with an enclosed value of the same type as the `struct`:

    ```rust
    macro_rules! my_new_macro {
        (
            struct $name:ident {
                $field:ident: $field_type:ty,
                $( $ff:ident : $ff_type:ty ),*
            }
        ) => {
            enum $name {
                $field($field_type),
                $( $ff($ff_type) ),*
            }
        }
    }
    ```

    With the above macro defined ... this code ...

    ```rust
    my_new_macro! {
        struct Test {
            a: i32,
            b: String,
            c: Vec<String>
        }
    }
    ```

    ... will emit this code ...

    ```rust
    enum Test {
        a(i32),
        b(String),
        c(Vec<String>)
    }
    ```
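
    And just to sanity check that the emitted `enum` behaves like a hand-written one, a quick usage sketch (assuming the macro invocation above; the lowercase variant names will trigger a non-camel-case warning, but it compiles):

    ```rust
    // Assumes `my_new_macro!` has been defined and invoked as above,
    // so `enum Test` exists at module level.
    fn main() {
        let v = Test::c(vec!["hello".to_string()]);
        match v {
            Test::a(n) => println!("i32: {n}"),
            Test::b(s) => println!("String: {s}"),
            Test::c(xs) => println!("Vec<String> with {} item(s)", xs.len()),
        }
    }
    ```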
    # Application: "The code" before making it more efficient with a macro

    *Basically ... a simple system for custom types to represent physical units.*

    ## The Concept (and a rant)

    A basic pattern I've sometimes implemented on my own (without bothering with dependencies that is) is creating some basic representation of physical units in the type system. Things like meters or centimetres and degrees or radians etc. If your code relies on such and performs conversions at any point, *it is way too easy to fuck up*, and therefore worth, IMO, creating some safety around. NASA provides an obvious warning. As does, IMO, common sense and experience: most scientists and physical engineers learn the importance of "dimensional analysis" of their calculations.

    In fact, it's the sort of thing that should arguably be built into any language that takes types seriously (like eg rust). I feel like there could be an argument that it'd be as reasonable as the numeric abstractions we've worked into programming??

    At the bottom I'll link whatever crates I found for doing a better job of this in rust (one of which seemed particularly interesting).

    ## Implementation (without using a macro)

    The essential design is (again, this is basic):

    * A single type for a particular dimension (eg time or length)
    * Method(s) for converting between units of that dimension
    * *Ideally*, flags or constants of some sort for the units (thinking of enum variants here)
        * These could be methods too

    ```rust
    #[derive(Debug)]
    pub enum TimeUnits { s, ms, us }

    #[derive(Debug)]
    pub struct Time {
        pub value: f64,
        pub unit: TimeUnits,
    }

    impl Time {
        pub fn new<T: Into<f64>>(value: T, unit: TimeUnits) -> Self {
            Self { value: value.into(), unit }
        }

        // Value of each unit relative to the base unit (seconds)
        fn unit_conv_val(unit: &TimeUnits) -> f64 {
            match unit {
                TimeUnits::s => 1.0,
                TimeUnits::ms => 0.001,
                TimeUnits::us => 0.000001,
            }
        }

        fn conversion_factor(&self, unit_b: &TimeUnits) -> f64 {
            Self::unit_conv_val(&self.unit) / Self::unit_conv_val(unit_b)
        }

        pub fn convert(&self, unit: TimeUnits) -> Self {
            Self {
                value: self.value * self.conversion_factor(&unit),
                unit,
            }
        }
    }
    ```

    So, we've got:

    * An `enum` `TimeUnits` representing the various units of time we'll be using
    * A `struct` `Time` that will be any given `value` of "time" expressed in any given `unit`
    * With methods for converting from any unit to any other unit, the heart of which being a `match expression` on the new unit that hardcodes the conversions (relative to a base unit of seconds ... see the `conversion_factor()` method, which generalises the conversion values).

    *Note:* I'm using `T: Into<f64>` for the `new()` method and `f64` for `Time.value` as that is the easiest way I know to accept either integers or floats as values. It works because `i32` (and most other numerics) can be converted losslessly to `f64`.

    *Obviously you can go further than this. But the essential point is that each unit needs to be a new type with all the desired functionality implemented manually or through some handy use of blanket trait implementations.*

    # Defining a macro instead

    *For something pretty basic, the above is an annoying amount of boilerplate!!* May as well rely on a dependency!?

    Well, we can write the boilerplate once in a macro and then only provide the informative parts! In the case of the above, the only parts that matter are:

    * The name of the type/`struct`
    * The name of the units `enum` type we'll use (as they'll flag units throughout the codebase)
    * The names of the units we'll use and their value relative to the base unit.

    IE, for the above, we only need to write something like:

    ```rust
    struct Time {
        value: f64,
        unit: TimeUnits,
        s: 1.0,
        ms: 0.001,
        us: 0.000001
    }
    ```

    *Note: this isn't valid rust!* But that doesn't matter, so long as we can write a pattern that matches it and *emit* valid rust from the macro, it's all good! (Which means we can write our own little DSLs with native macros!!)

    To capture this, all we need are what we've already done above: capture the first two fields and their types, then capture the remaining "field names" and their values in a repeating pattern.

    ## Implementation of the macro

    **The pattern**

    ```rust
    macro_rules! unit_gen {
        (
            struct $name:ident {
                $v:ident: f64,
                $u:ident: $u_enum:ident,
                $( $un:ident : $value:expr ),+
            }
        )
    }
    ```

    * Note the repeating fragment doesn't provide a type for the field, but instead captures an expression `expr` after it, *despite being invalid rust*.

    **The Full Macro**

    ```rust
    macro_rules! unit_gen {
        (
            struct $name:ident {
                $v:ident: f64,
                $u:ident: $u_enum:ident,
                $( $un:ident : $value:expr ),+
            }
        ) => {
            #[derive(Debug)]
            pub struct $name {
                pub $v: f64,
                pub $u: $u_enum,
            }

            impl $name {
                // Value of each unit relative to the base unit
                fn unit_conv_val(unit: &$u_enum) -> f64 {
                    match unit {
                        $( $u_enum::$un => $value ),+
                    }
                }

                fn conversion_factor(&self, unit_b: &$u_enum) -> f64 {
                    Self::unit_conv_val(&self.$u) / Self::unit_conv_val(unit_b)
                }

                pub fn convert(&self, unit: $u_enum) -> Self {
                    // Use the captured field names ($v/$u) so the emitted
                    // code works whatever the fields are actually called.
                    Self {
                        $v: self.$v * self.conversion_factor(&unit),
                        $u: unit,
                    }
                }
            }

            #[derive(Debug)]
            pub enum $u_enum {
                $( $un ),+
            }
        }
    }
    ```

    Note the repeating capture is used twice here, in different ways.

    * The capture is: `$( $un:ident : $value:expr ),+`

    And in the emitted code:

    * It is used in the `unit_conv_val` method as: `$( $u_enum::$un => $value ),+`
        * Here the `ident` `$un` is being used as the variant of the `enum` that is defined later in the emitted code
        * Where `$u_enum` is also used without issue, as the name/type of the `enum`, despite not being part of the repeated capture but another variable captured outside of the repeated fragments.
    * It is then used in the definition of the variants of the enum: `$( $un ),+`
        * Here, only one of the captured variables is used, which is perfectly fine.

    ## Usage

    Now all of the boilerplate above is unnecessary, and we can just write:

    ```rust
    unit_gen! {
        struct Time {
            value: f64,
            unit: TimeUnits,
            s: 1.0,
            ms: 0.001,
            us: 0.000001
        }
    }
    ```

    *Usage from `main.rs`:*

    ```rust
    use units::Time;
    use units::TimeUnits::{s, ms, us};

    fn main() {
        let x = Time { value: 1.0, unit: s };
        let y = x.convert(us);
        println!("{:?}", x);
        println!("{:?}", y);
    }
    ```

    *Output:*

    ```
    Time { value: 1.0, unit: s }
    Time { value: 1000000.0, unit: us }
    ```

    * Note how the `struct` and `enum` created by the emitted code are properly available from the module as though they were written manually or directly.
    * In fact, my LSP (`rust-analyzer`) was able to autocomplete these immediately once the macro was written and called.

    # Crates for unit systems

    I did a brief search for actual unit systems and found the following.

    ## `dimensioned`

    **[`dimensioned` documentation](https://docs.rs/dimensioned/latest/dimensioned/index.html)**

    * Easily the most interesting to me (from my quick glance), as it seems to have created the most native and complete representation of physical units in the type system
    * It creates, through types, a 7-dimensional space, one for each SI base unit
    * This allows all possible units to be represented as a reduction to a point in this space.
        * EG, if the dimensions are `[seconds, meters, kgs, amperes, kelvins, moles, candelas]`, then the `Newton`, `m.kg / s^2`, would be `[-2, 1, 1, 0, 0, 0, 0]`.
    * This allows all units to be mapped directly to this consistent representation (*interesting!!*), and all operations to then be done easily and systematically.

    Unfortunately, I'm not sure if the [repository](https://github.com/paholg/dimensioned) is still maintained.

    ## uom

    **[uom documentation](https://docs.rs/uom/latest/uom/)**

    * This might actually be good too, I just haven't looked into it much
    * It also seems to be currently maintained

    ## F#

    Interestingly, `F#` actually has a system built in!

    * See [learning documentation on `F#` here](https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/units-of-measure)
    * Also this older (2008) [series of blogs on the feature here](https://learn.microsoft.com/en-us/archive/blogs/andrewkennedy/units-of-measure-in-f-part-one-introducing-units)
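
    As a postscript on the macro: the payoff is that a whole second dimension now costs only a few lines. Here's a sketch reusing `unit_gen!` for a hypothetical `Length` type (the names and conversion values are my own illustration, not from the crates above):

    ```rust
    // Reusing the unit_gen! macro defined above for a second dimension.
    // `Length`/`LengthUnits` and these factors are hypothetical examples.
    unit_gen! {
        struct Length {
            value: f64,
            unit: LengthUnits,
            m: 1.0,
            km: 1000.0,
            cm: 0.01
        }
    }

    fn main() {
        let d = Length { value: 2.5, unit: LengthUnits::km };
        println!("{:?}", d.convert(LengthUnits::m));
        // -> Length { value: 2500.0, unit: m }
    }
    ```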

    15
    14

    I looked around and struggled to find out what it does. My guess would be that it notifies you when new posts are made to communities you subscribe to. But that sounds like a lot, so I'm really not sure. Otherwise, is it me or does the wording here not speak for itself?

    15
    20
    www.bain.com

    Report showing the shift in AI sentiment in the industry. Relatively in-depth and probably coming from a pro-AI bias (I haven’t read the whole thing). The last graph at the bottom was what I was linked to. It clearly shows a corner turning, where those closer to the actual “product” are now sceptical while management (the last category in the chart) are more committed.

    11
    0

    Generally, the lens through which I've come to criticise any/all fediverse projects is how well they foster community building. One reason why I like and "advocate" for the lemmy/threadiverse side of things is precisely because of this and how the centrality of the community/sub/group is a good way of organising social media (IMO).

    Also, because of that, I recently came to be skeptical of the effects that the "All" feed can have. I didn't even realise that people relied mostly on the All feed until recently. I think I've reached the point now of being against it (at least tentatively). I know, it's a staple and there's no way it's going away. And I know it's useful. But thinking about the feature set, through the community building lens, I think it'd be fair to say that things are out of balance: they don't promote community building enough while also providing the All feed which dissolves community building.

    Not really a criticism of the developers ... AFAIU, the All feed is easier to implement than any other community building feature ... and it's expected from reddit (though it isn't normal on forums AFAICT, which is maybe worth considering for anyone happy to reassess what about reddit is retained and what isn't).

    But still, I can imagine a platform that is more focused on communities:

    * Community explorer tool built in.
        * Could even be a substitute for an All feed ... where you can browse through various communities you don't know about and see what they've posted recently
    * Multi-communities (long time coming by now for many I'd say)
        * Could even be part of the community explorer tool where you can create on-the-fly multi-communities to see their posts in a temporary feed
    * Private and local only communities (already here on lemmy and coming for private communities)
    * Post visibility options for Public communities (IE, posts that opt-in private)
    * More flexible notifications for various things/events that happen within a community
    * Wikis
    * Chat interface
        * I'm thinking this is pretty viable given that Lemmy used to use a web-socket auto-updating design ... add that to the flat chat view and you've got a chat room. There are resource issues, so limiting them to one per community or 6hrs per week per community or something would probably be necessary.

    A possibly interesting and frustrating aspect of all of these suggestions/ideas above is I can see their federation being problematic or difficult ... which raises the issue of whether there's serious tension between platform design and protocol capabilities.

    27
    18
    https://www.youtube.com/watch?v=QoSdJB4D3Fc

    There are also some gems in there about how old and constant the underplaying of the amount of VFX in a film is. From the video, Stand By Me had a VFX shot (the train bridge scene, of course) but no one was allowed to talk about that. And of course The Fugitive's train crash scene had to have "real trains" even though it's all mostly miniatures.

    19
    7
    https://pony.social/@thephd/112818744298401332

    The post mentions data or research on how rust usage is resulting in fewer errors in comparison to C. Anyone aware of good sources for that?

    10
    8

    *Let's try this experiment*

    Start watching **Big Trouble in Little China** at **7pm**, **Central Time, USA** (as precisely as you can) ... and come here for live posts as you watch!

    *This is ~24 hours from the time of this post.* Here's a [timeanddate.com link to the timezone(s) involved](https://www.timeanddate.com/worldclock/converter.html?iso=20240727T235800&p1=tz_aet&p2=tz_cest&p3=tz_ct).

    ---

    [@AVincentInSpace@pawb.social](https://pawb.social/u/AVincentInSpace) has volunteered to run a live watch on cytube the day afterward (approx 7pm Monday). Posts and links should be coming (and see comments below on the idea).

    16
    9

    *Let's try this experiment*

    Start watching **Big Trouble in Little China** at **7pm**, **Central European Summer Time** (as precisely as you can) ... and come here for live posts as you watch!

    *This is ~17 hours from the time of this post.* Here's a [timeanddate.com link to the timezone(s) involved](https://www.timeanddate.com/worldclock/converter.html?iso=20240727T235800&p1=tz_aet&p2=tz_cest&p3=tz_ct).

    8
    1