"I think that guaranteeably "Friendly" AI is a chimera. Guaranteeing anything about beings massively smarter than ourselves seems implausible. But I suspect we can bias the odds, and create AI systems that are more likely than not to be Friendly....
To do this, we need to get a number of things right:
(None of these things gives any guarantees, but combined they would seem to bias the odds in favor of a positive outcome.)

1) build our AI systems with the capability to make ethical judgments both by rationality and by empathy
2) interact with our AI systems in a way that teaches them ethics and builds an emotional bond
3) build our AI systems with rational, stable goal systems (which humans don't particularly have)
4) develop advanced AI via a relatively "slow takeoff" rather than an extremely fast takeoff to superhuman intelligence, so we can watch and study what happens and adjust accordingly ... and that probably means trying to develop advanced AI soon, since the more advanced other technologies are by the time advanced AI comes about, the more likely a hard takeoff is...
5) integrate our AIs with the "global brain" of humanity so that the human race can democratically impact the AI's goal system
6) create a community of AIs rather than just one, so that various forms of social pressure can mitigate the risk of any one of the AIs running amok"