What you say describes my experience 10 to 15 years ago, not my experience today. Compare the settings dialog in KDE Plasma to the Windows settings dialog, for instance. Or should I say the myriad of Windows settings dialogs.
What was difficult in your experience?
I found basic worktree operations to fail with submodules. The worktree doesn’t know about the submodules, and again and again messes up the links to them. Basic pulling, switching branches, …, all of this frequently fails because the link to the submodule is broken. I ended up creating the submodules as worktrees of a separate checkout of the submodule repo, and recreating these submodule worktrees over and over. I pretty much stopped using worktrees at that point.
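Roughly what that workaround looked like, with made-up paths (a main repo app with a submodule at extern/libfoo, plus a separate full checkout of the submodule repo):

    # worktree of the main repo; its submodule links tend to break
    cd ~/src/app
    git worktree add ../app-wt1 my-branch

    # instead of 'git submodule update', graft a worktree of the
    # separately checked-out submodule repo onto the submodule path
    cd ~/src/libfoo
    git worktree add --detach ~/src/app-wt1/extern/libfoo v1.2

    # ...and recreate that submodule worktree every time it breaks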
Have you tried the global git config that enables recursing over submodules by default?
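I mean this one; it makes checkout, pull, switch etc. recurse into submodules by default:

    git config --global submodule.recurse true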
Nope, fingers crossed it helps for you ;) Unrelated to worktrees, but: in the end I like submodules in theory and found them to be absolutely terrible in practice, and that’s without even factoring in the worktrees. So we went back to a monorepo.
I’m a C++ dev; I have one checkout of the main repo and 3 worktrees. Switching branches can be expensive because of recompiles, so for e.g. quick fixes I’ll use worktree 1, where I typically don’t even compile the code, just make the fix and push it to the CI system. Worktrees 2 and 3 I keep at older releases, so I can immediately fire up a development build and one of those releases side by side and compare results as well as the code.
The cool thing about worktrees compared to multiple checkouts is that you only have one .git folder, so less disk space. But more importantly, local branches (well, everything actually) are shared, so you can create a local branch in the main checkout and later come back to it in a worktree. You also don’t need fetching/… in the worktrees, as they share the same .git folder.
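As a sketch of that flow (branch and path names made up):

    cd ~/src/app                          # main checkout
    git branch quick-fix                  # local branch, created here
    git worktree add ../wt1 quick-fix     # later: come back to it in a worktree
    # ../wt1 shares the same .git folder, so all branches and
    # already-fetched refs are visible there without extra fetching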
The only thing I found virtually impossible to work with is worktrees + submodules.
To me that sounds like “that machine prototype is inefficient - just skip the prototype next time and build the real thing right away.”
I don’t think you understand my point, which is that developing the prototype takes e.g. 50% more time than it should because of a complete lack of understanding of software development.
Mostly ML or data-processing libraries, I would assume. I’ve read tons of REST server and ORM Python code, for instance; none of that is written in C.
Wrt Rust: no experience with that. I do do a lot of C++; there you quickly reach the end, as you’re typically consuming quite a few libraries, but the complete sources of those aren’t part of what is parsed by the IDE, since keeping all of that in memory would be unworkable.
My point about jumping in was that you can immediately start reading the sources. Most alternative languages are compiled in some form or other, so all you’ll see is an API, not the implementation.
As a researcher: all the professional software engineers here have no idea about the requirements for code in a research setting.
As someone with extensive experience in both: my first requirement would be readability. A single Python file? Fine with that. A 1k+ line single Python file without functions or other means of structuring the code: please no.
The nice thing about Python is that your IDE lets you jump into the code of the libraries you’re using; I find that a good way to look at how experienced Python devs write code.
Odd take imo. OP is a programmer, albeit perhaps not a very good one. I did a PhD (computational astrophysics) and have been working as a professional dev for 10 years since. Imo a good programmer writes code that solves the problem at hand; I don’t see that much of a difference between the problem being scientific or a backend service. It doesn’t mean “write lots of boilerplate-y factories, interfaces and other layers” to me, neither in research nor outside of it.
That being said, there is so much time lost in research institutes because of shoddy programming by researchers, or simply ignorance (not knowing a debugger exists, for instance). OP wanting to level up their game would almost certainly result in getting to research results faster, plus they may be able to help their peers become better as well.
And then those methods grow and grow, or stop making sense, or start meaning something else, and you have to go through the same abstract-deprecate-remove cycle again. Rinse and repeat, and if you do this regularly enough you have web development, where your feet get swept out from under you every couple of years.
It’s a bit of a pick your poison situation, for me the backwards compatibility path is the right call here though.
If I wrote an IDE and detected tabs I’d just have it delete the codebase
Qt Creator also embeds a terminal now. I immediately switch it off, but I’m probably an atypical user: I always have a separate terminal open instead, where I typically have 4 or 5 tabs open.
The dosubot also downvoted its own first post in that thread lmao
Rebasing is basically copy/paste of commits. I do it all the time, to keep a feature branch updated with develop for instance.
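E.g. the “keep a feature branch updated with develop” case (branch names made up):

    git switch my-feature
    git fetch origin
    git rebase origin/develop   # re-applies (copy/pastes) my commits on top of develop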
By using big data on the IoT, of course!
I read through the better part of a linked thread: https://forum.dlang.org/thread/[email protected]?page=1. And wow, as a C++ user, I’m not sure whether I should feel blessed by how stable and backwards-compatible the language is, or conclude that D users must be bonkers to put up with the breakages. Using C++ both professionally and for hobby projects, in the last 5 or so years I can remember encountering exactly 1 (gcc) compiler bug. There was a simple workaround, and someone else had already reported it, so with the next minor update the bug was fixed. And the code that triggered it was a nested CRTP spawn of hell, so I didn’t blame the compiler for borking on it in the first place; it would’ve been better for everyone had it never compiled :p
Upgrading a major C++ version was never free in my experience, but even when working in a codebase with ~2M LOC the upgrade (e.g. C++14 -> C++17) was something that could be prepared in a set of feature branches by one person over the span of one, maybe two weeks. That’s for fixing compile errors; I don’t remember if we had issues with runtime errors due to an upgrade, but if we did they must’ve been minor, because I remember the transition to 17 being pretty smooth. Note that 14 -> 17 means changing the requested C++ version for the project, which is different from upgrading the actual compiler, i.e. you can do the latter without the former and your code should not require any changes.
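To make that last distinction concrete (GCC versions made up for illustration):

    # changing the requested C++ version: same compiler, different -std flag;
    # this is the part that can require fixing compile errors
    g++-9 -std=c++14 -c foo.cpp
    g++-9 -std=c++17 -c foo.cpp

    # upgrading the actual compiler: new binary, same -std flag;
    # code should not require changes (in theory)
    g++-12 -std=c++14 -c foo.cpp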
Indeed. They say they’ve been repeatedly featured on the front page of HN and the site didn’t fall over; I’ve seen many examples that did.
It would be odd to not have HR involved in hiring imo. When I was hiring for my team I was happy HR was involved: I gauged technical ability plus fit for the team, HR gauged general fit with the company. We’d then have a chat afterwards to compare and see whether we would move forward with the candidate, and honestly the opinions were always along the same lines. It took some of the responsibility off my back knowing that the candidate had received the green light from an independent party as well.
This presented a fraudulent focus on diversity.
What a day to be able to read
Wonder how much of this relates to SUSE? How “normie-tolerant” is that? I’ve been printing for years without any issues, for instance, and have an HP printer that used to hate my Linux OS with a passion.