Here are some ways to write data out to HBase from Spark:
HBase supports bulk loading from files in its HFile format. This is a very efficient way to load a large amount of data into HBase, because HBase reads the files directly and doesn't pass each write through the usual write path (which includes extra logic for resiliency). Use the "saveAsHadoopFile" or "saveAsNewAPIHadoopFile" method of a pair RDD to save out in HFile format (via HFileOutputFormat2), then point HBase's bulk-load tool at the resulting directory.
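A rough sketch of the preparation step, assuming PySpark. The key constraint is that HFiles must be written in sorted rowkey order, so the pair RDD has to be sorted before saveAsNewAPIHadoopFile is called. The table name, output path, and host details below are placeholders, and in practice this route is usually done from Scala/Java; from PySpark you would additionally need key/value converters for ImmutableBytesWritable and KeyValue.

```python
def sort_for_hfiles(pairs):
    """HFileOutputFormat2 rejects out-of-order keys, so sort
    (rowkey, value) pairs lexicographically by rowkey bytes.
    Pure helper that shows the ordering requirement on plain tuples."""
    return sorted(pairs, key=lambda kv: kv[0])

def bulk_load_prepare(rdd, out_dir="hdfs:///tmp/hfiles"):
    # sortByKey gives the global rowkey ordering that
    # HFileOutputFormat2 requires when writing HFiles.
    sorted_rdd = rdd.sortByKey()
    sorted_rdd.saveAsNewAPIHadoopFile(
        out_dir,
        "org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2",
        keyClass="org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        valueClass="org.apache.hadoop.hbase.KeyValue",
    )
    # Afterwards, hand the files to HBase with its bulk-load tool,
    # e.g.:  hbase completebulkload <out_dir> <table>
```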
If the result set is small enough to be collected to the Spark driver, you can collect your results there and open a regular HBase connection to write them out.
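A minimal sketch of the collect-and-write approach, assuming the happybase Thrift client and a table named "my_table" with column family "cf" (all of these are placeholders for your own setup):

```python
def rows_to_puts(rows):
    """Turn (key, value) pairs into (rowkey, {column: value}) puts.
    Pure helper, kept separate so it is easy to unit-test."""
    return [(str(k).encode(), {b"cf:value": str(v).encode()})
            for k, v in rows]

def write_collected(rdd, host="localhost", table_name="my_table"):
    import happybase  # assumes an HBase Thrift server running on `host`
    rows = rdd.collect()  # only safe if the result fits in driver memory
    conn = happybase.Connection(host)
    try:
        table = conn.table(table_name)
        with table.batch() as batch:  # buffers puts, sends on exit
            for rowkey, columns in rows_to_puts(rows):
                batch.put(rowkey, columns)
    finally:
        conn.close()
```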
You can also call mapPartitions (or foreachPartition) and write the results to HBase directly from the worker nodes. Make sure to open the HBase connection inside the mapPartitions function, so that the connection is established on the worker node, rather than opening a connection on the driver and trying to serialize it out to the workers (you'll get a NotSerializableException if you do that). You may need to do some performance tuning/optimization with this method, such as batching puts instead of writing one row at a time.
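The per-partition pattern can be sketched like this, again assuming the happybase client; the host and table names are hypothetical. The important part is that the connection is created inside the function that runs on the worker, so nothing unserializable is shipped from the driver:

```python
def make_row(record):
    """Pure helper mapping a (key, value) record to a put."""
    k, v = record
    return str(k).encode(), {b"cf:value": str(v).encode()}

def write_partition(records):
    import happybase  # imported on the worker, not the driver
    conn = happybase.Connection("hbase-thrift-host")  # placeholder host
    try:
        table = conn.table("my_table")
        # Batch puts to avoid one RPC per row; tune batch_size
        # as part of the performance work mentioned above.
        with table.batch(batch_size=1000) as batch:
            for record in records:
                rowkey, columns = make_row(record)
                batch.put(rowkey, columns)
    finally:
        conn.close()

# Usage from the driver:  rdd.foreachPartition(write_partition)
```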
NOTE: There are a number of open source projects that connect Spark with HBase out there; you can search Spark Packages for those, or search on the internet. I haven't used them personally, so I'm not familiar enough to recommend one.