I’m finding myself with a couple of really big databases and my PC is throwing memory errors, so I’m moving the project to Polars and learning as I go. I’d like to hear about your experience: how you made the switch, what frustrated you, and what you found good. (I’m still getting used to the syntax, but I’m loving how fast it reads the databases.)
Polars has essentially replaced Pandas for me. It is MUCH faster (in part due to lazy queries) and uses much less RAM, especially if the query can be streamed. While the syntax takes some getting used to at first, it lets me express a lot more without resorting to `apply` with custom Python functions.

My biggest gripe is that the error messages are significantly less readable due to the amount of noise: the stack trace into the query executor does not help with locating my logic error, and the stringified query does not tell me where in the query things went wrong.
I had to move away from `apply` a while ago because it was extremely slow, and started using masks and vectorized operations. That’s actually what is a roadblock for me right now; I can’t find a way to make it work (I used to do `df.loc[mask, 'column']`, but `df.with_columns(pl.when(mask).then()...)` is not working as expected).
It’s unclear to me what you’re trying to accomplish. Do you want to update the elements where the mask is true?
There’s a categorical column of integers with some exceptional cases where letters are included. I need to format the column while skipping those exceptional cases, but I just found out what was actually causing my problem: pandas imported it as a mixed-type column, while Polars imported it as a string, preserving the original correct formatting.
I thought I’d be using Polars more, but in the end, professionally, when I have to process large amounts of data I won’t be doing that on my computer but on a Hadoop cluster via PySpark, which also has a very non-Pythonic syntax. For smaller stuff Pandas is just more convenient.
My company is moving to Databricks, which I know uses PySpark. I’ve never used it, but I guess eventually I’m going to have to learn it too.
Nope. I am working with geodata, so I need GeoPandas for my work. Sadly, there is no serious alternative so far. If that changes in the future, I am absolutely on board with giving Polars a try.
I moved from pandas.
That’s it, there is no Polars. It’s been great!
I am still using R dataframes… or tibbles :)