You are not looking hard enough; it's literally on the right side of the forum under "Quick Links". Everything you need to know is explained in the guide, and the FAQ covers commonly asked questions. Use Ctrl+F to quickly search for keywords like fan-dst, mask, or converter.
https://mrdeepfakes.com/forums/thread-guide-deepfacelab-explained-and-tutorials
https://mrdeepfakes.com/forums/thre...d-tips-for-making-deepfakes-using-deepfacelab
As for using software like FaceGen: you are not going to get more angles out of it. This type of software cannot properly guess how a face looks from an angle it has never seen. AI can help a bit, and that's why DFL works at all: by matching up the angles it has seen and learning general facial structure, it can figure some things out. But it still won't guess what the left side of a face looks like if that side never appeared in SRC; it's all estimation, guesses, and clever algorithms.
Also, software like FaceGen, or the more advanced solutions used for photogrammetry/3D scanning, requires data that is all of the same subject, completely still, with a neutral facial expression, and lit evenly from all sides so that no shadows form on the face and confuse the algorithm. Without all of that, the scan and 3D reconstruction will fail.
You can't just throw a few pics into face scanning software and hope it can find the same spots across pictures taken in completely different lighting conditions, at different angles, with different camera settings, or even with a different camera/lens/focal length, where one picture shows someone smiling and another shows an open mouth or an angry expression.
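To give a feel for why inconsistent lighting breaks this kind of matching, here is a toy sketch (not how DFL or any real photogrammetry tool is implemented; the pixel values and patch size are made up). Scanning software has to find the same physical spot in several photos by comparing pixel patches, and a simple similarity measure like sum of squared differences only works when the photos agree on brightness:

```python
# Toy illustration: a naive patch matcher (the kind of comparison that
# underlies finding "the same spot" across photos) falls apart when
# lighting differs between shots. All numbers are invented for the demo.

def ssd(a, b):
    """Sum of squared differences between two equal-size pixel patches.
    Lower = better match; 0 = identical."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# A flattened 3x3 grayscale patch around one facial landmark in "photo 1".
patch = [10, 12, 11, 40, 42, 41, 90, 92, 91]

# The same spot in "photo 2", taken under identical lighting:
same_light = [10, 12, 11, 40, 42, 41, 90, 92, 91]

# The same spot again, but the second photo is lit differently
# (every pixel +50 brightness) -- the matcher now scores the correct
# spot as a terrible match, so correspondence search fails.
diff_light = [p + 50 for p in patch]

print(ssd(patch, same_light))  # 0 -> perfect match
print(ssd(patch, diff_light))  # 22500 -> rejected as a mismatch
```

Real pipelines use more robust descriptors than raw pixel differences, but they still rely on the photos being consistent enough, which is exactly why scanning rigs use even, fixed lighting and a still subject.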
If it worked like that, we wouldn't need AI or days of training; we would just throw in a few pics, the computer would build a 3D model of the face, and we would animate it with data from our target/destination video. At least for now, with currently available software, we need all that data. AI can guess a bit, but only as long as it knows how things should look.
If someone only ever showed you people from the front, how would you know what someone looks like from the back? Or from the side? Hopefully, as time passes, we will get something better that lets us use fewer pictures: models that already know how people smile and how faces usually look from different angles, so they can adapt faster and more easily.
If you have trouble finding enough angles, you can, for example, pretrain your model on a similar-looking celebrity, or avoid using destination videos with angles that your SRC dataset doesn't cover.