Fabric Study Notes
Load data into the Lakehouse.
%%python
# This creates a managed Delta table: the Parquet files are stored under the
# Tables folder, and when the table is deleted the associated Parquet files
# are deleted automatically as well.
df = spark.read.load(path='Files/Data/sales.csv', format='csv', header=True)
df.write.format('delta').saveAsTable('test')

%%python
# We can also create an external Delta table: the Parquet files are saved under
# the external location we specify, and when the table is deleted the associated
# Parquet files are NOT deleted automatically.
df = spark.read.load(path='Files/Data/sales.csv', format='csv', header=True)
# The path and table name below are examples; adjust them to your workspace.
df.write.format('delta').option('path', 'Files/External/sales').saveAsTable('test_external')
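To confirm whether a table ended up managed or external, the Spark catalog can be inspected. A minimal sketch, assuming a running Fabric/Spark session `spark` and a table named `test` as above (this runs inside a notebook, not standalone):

```python
# tableType is 'MANAGED' for tables whose Parquet files live under the Tables
# folder, and 'EXTERNAL' for tables written to a user-specified path.
for t in spark.catalog.listTables():
    print(t.name, t.tableType)

# DESCRIBE TABLE EXTENDED also shows the Location of the underlying files,
# which is useful for checking where the Parquet data actually lives.
spark.sql("DESCRIBE TABLE EXTENDED test").show(truncate=False)
```

Dropping each table afterwards and re-listing the Files folder is a quick way to verify the deletion behavior described above.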
Posted by John Liu Wednesday, December 18, 2024