I agree with the comment that feature engineering is often domain-dependent and good features can be developed by understanding the underlying process/domain.
That being said, there are also generic techniques, like target encoding of categorical features or embeddings, that are helpful to be aware of so that you can represent that domain knowledge in the most effective way. This is not a substitute for domain knowledge; think of it as a complement. The general goal is to present the right information in the best representation for your algorithm to learn from.
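As a concrete illustration, here is a minimal sketch of target encoding with smoothing, assuming a toy pandas DataFrame with made-up values and a hypothetical smoothing weight `m`:

```python
import pandas as pd

# Toy data: one categorical feature and a binary target (illustrative values).
df = pd.DataFrame({
    "city": ["NY", "NY", "SF", "SF", "SF", "LA"],
    "bought": [1, 0, 1, 1, 0, 0],
})

# Target encoding: replace each category with the target mean for that
# category, smoothed toward the global mean so that rare categories
# don't get extreme values. `m` is a hypothetical smoothing weight.
m = 2.0
global_mean = df["bought"].mean()
stats = df.groupby("city")["bought"].agg(["mean", "count"])
smoothed = (stats["count"] * stats["mean"] + m * global_mean) / (stats["count"] + m)
df["city_encoded"] = df["city"].map(smoothed)
print(df[["city", "city_encoded"]])
```

In practice you would fit the encoding on training folds only and apply it to held-out data, otherwise you leak the target into the feature.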
Some specific resources starting from more concrete/tactical to more conceptual:
https://www.coursera.org/learn/competitive-data-science
This is a free online course that gives a good summary of feature engineering techniques. I don't recommend everything in it, as some of it is Kaggle-specific and not necessarily useful in industry (e.g. purposefully pursuing data leaks, massive stacks of model ensembles).
https://fast.ai
You may also find a course like fast.ai helpful, since deep learning is largely about using various techniques to learn features that cannot be constructed manually.
https://sites.astro.caltech.edu/~george/ay122/cacm12.pdf
Finally, here is a classic and fundamental paper that you may find insightful about ML and feature engineering/data representation in general.