Is your cube optimized?
If not, Analysis Services processes it with a query that joins to each
dimension table to ensure that every row read from the database has a member
in each dimension; rows that don't match any member are silently dropped,
which would explain your missing rows.
So check that your source table is filled correctly and that every row is
properly linked to each dimension table.
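A quick way to check for orphan rows is a sketch like this (FactSales,
DimProduct, and ProductKey are placeholder names; substitute your own tables
and keys):

    -- Count fact rows with no matching member in the product dimension.
    -- Repeat this check for each dimension key in the fact table.
    SELECT COUNT(*) AS OrphanRows
    FROM FactSales f
    LEFT JOIN DimProduct p ON f.ProductKey = p.ProductKey
    WHERE p.ProductKey IS NULL;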
For example, if your input table is linked to a "product" table, add an
"unknown" product and link every row that has no valid product to this
unknown product (and do this for each dimension); see the sketch below.
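In T-SQL that could look something like this, reusing the placeholder names
from above (the key -1 for the "unknown" member is just a convention; any
unused key works):

    -- Add a catch-all "unknown" member to the dimension table...
    INSERT INTO DimProduct (ProductKey, ProductName)
    VALUES (-1, 'Unknown');

    -- ...then point every orphan fact row at it, so the join
    -- during cube processing no longer drops those rows.
    UPDATE f
    SET f.ProductKey = -1
    FROM FactSales f
    LEFT JOIN DimProduct p ON f.ProductKey = p.ProductKey
    WHERE p.ProductKey IS NULL;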
Finally, once your input table is configured correctly and every row is
linked to each dimension table, you can optimize your cube to speed up
processing.
Good luck.
Quote:
> I have been creating large cubes very successfully. I was
> on my 8th one when it didn't process properly. My SQL
> table has 14,953,206 rows. The cube process ended without
> error but only had 8,917,056 rows accounted for.
> I copied my table starting at row 8,900,000 and started
> the process again. This time out of 6,053,207 rows I
> ended up with 3,710,303.
> Does anyone have a clue where to start to figure this out?