I’m finding myself with a couple of really big databases, and my PC is throwing memory errors, so I’m moving the project to Polars and learning as I go. I’d like to hear about your experience: how you did it, what frustrated you, and what you found good. (I’m still getting used to the syntax, but I’m loving how fast it reads the databases.)

  • 8uurg@lemmy.world · 10 days ago

    Polars has essentially replaced Pandas for me. It is MUCH faster (in part due to lazy queries) and uses much less RAM, especially if the query can be streamed. While the syntax takes a bit of getting used to at first, it lets me specify a lot more without having to resort to apply with custom Python functions.
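    A minimal sketch of that lazy + streaming pattern (the file and column names here are made up for illustration):

    ```python
    import polars as pl

    # Lazy scan: nothing is read yet, Polars only builds a query plan
    lazy = (
        pl.scan_csv("big_file.csv")        # hypothetical file
        .filter(pl.col("amount") > 0)      # hypothetical columns
        .group_by("category")
        .agg(pl.col("amount").sum())
    )

    # Streaming execution processes the file in batches instead of
    # loading it all into RAM (newer releases: collect(engine="streaming"))
    result = lazy.collect(streaming=True)
    ```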

    My biggest gripe is that the error messages are significantly less readable because of all the noise: the stacktrace into the query executor does not help with locating my logic error, and the stringified query does not tell me where in the query things went wrong…

    • driving_crooner@lemmy.eco.br (OP) · 9 days ago

      I had to move away from apply a while ago because it was extremely slow, and started using masks and vectorized operations. That’s actually what’s being a roadblock for me right now; I can’t find a way to make it work (I used to do `df.loc[mask, 'column']`, but `df.with_columns(pl.when(mask).then()…)` is not working as expected).
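      (For reference, a minimal sketch of the usual translation, with hypothetical data and column names: the pandas masked assignment becomes a conditional expression that rebuilds the whole column.)

      ```python
      import polars as pl

      df = pl.DataFrame({"column": ["12", "34", "A7"], "flag": [True, False, True]})

      mask = pl.col("flag")  # any boolean expression works as the mask

      # pandas: df.loc[mask, "column"] = new_value
      # Polars: rebuild the column, keeping the old value where the mask is False
      df = df.with_columns(
          pl.when(mask)
          .then(pl.lit("new_value"))
          .otherwise(pl.col("column"))
          .alias("column")
      )
      ```

      A common surprise here is that omitting `.otherwise()` fills the unmatched rows with null instead of leaving them unchanged, which may be why it looks like it is not working as expected.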

      • 8uurg@lemmy.world · 9 days ago

        It is unclear to me what you are trying to accomplish. Do you want to update the elements where the mask is true?

        • driving_crooner@lemmy.eco.br (OP) · 9 days ago

          There’s this categorical column of integers that has some exceptional cases where letters are included. I need to process the column to format it, except for the exceptional cases, but I just found out it was giving me a problem because pandas imported it as a mixed type, while Polars imported it as a string, respecting the original correct formatting.
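          (If it helps, one way to sketch this, assuming the formatting step is zero-padding; the column name, pattern, and pad width are made up:)

          ```python
          import polars as pl

          df = pl.DataFrame({"code": ["42", "137", "A99", "7"]})

          # Hypothetical rule: purely numeric entries are the normal case
          is_numeric = pl.col("code").str.contains(r"^\d+$")

          # Format only the numeric codes; leave the exceptional
          # letter-bearing cases untouched
          df = df.with_columns(
              pl.when(is_numeric)
              .then(pl.col("code").str.zfill(6))
              .otherwise(pl.col("code"))
              .alias("code")
          )
          ```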

  • misk@sopuli.xyz · 10 days ago

    I thought I’d be using Polars more, but in the end, professionally, when I have to process large amounts of data, I won’t be doing that on my computer but on a Hadoop cluster via PySpark, which also has a very non-Pythonic syntax. For smaller stuff, Pandas is just more convenient.

    • driving_crooner@lemmy.eco.br (OP) · 9 days ago

      My company is moving to Databricks, which I know uses PySpark, but I’ve never used it. I guess eventually I’m going to have to learn it too.

  • gigachad@sh.itjust.works · 10 days ago

    Nope. I am working with geodata, so I need GeoPandas for my work. Sadly, there is no serious alternative so far. If that changes in the future, I am absolutely on board with giving Polars a try.