This is a reference post for me, mostly - I keep forgetting this stuff about SSAS... particularly cause #2.
Error
Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: [table name], Column: [column name], Value: [value]. The attribute is [attribute name].
Explanation
This means SSAS could not find a record in the dimension table [table name] where column [column name] contained value [value]. Essentially, a "SELECT COUNT(*) FROM [table name] WHERE [column name] = [value]" returned zero.
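You can reproduce the failing lookup yourself by plugging in the values from the error message - something like this, with hypothetical table and column names:

SELECT COUNT(*)
FROM dbo.DimCustomer          -- the [table name] from the error
WHERE CustomerKey = 12345;    -- the [column name] and [value] from the error
-- If this returns 0, you've reproduced exactly the lookup SSAS failed on.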
Cause(s) and Solution(s)
#1 - Errors in ETL
This doesn't happen to me - but it's apparently the most common cause: You've loaded facts into the fact table, but haven't loaded the requisite dimension members into the dimension table. (This may be an ETL design error, or a true data error.) This can ONLY happen if you're not using surrogate keys to map your facts to your dimensions. (Which is why it can't happen to me. If you're using surrogate keys, one of your steps in processing the fact data is to look up the surrogate keys - where you'll end up "not finding" one, and have to deal with it there, in your ETL, instead of in cube processing.)
==> Load your dimensions properly. If you can rearchitect your warehouse to use surrogate keys for dimension members - DO IT.
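If you want to catch these rows in your ETL before SSAS complains, a query along these lines (hypothetical fact and dimension names) lists facts that have no matching dimension member:

SELECT f.*
FROM dbo.FactSales AS f
LEFT JOIN dbo.DimCustomer AS d
  ON d.CustomerKey = f.CustomerKey
WHERE d.CustomerKey IS NULL;
-- Any rows returned were loaded without a matching dimension member - handle them here
-- (map them to an Unknown member, or reject them) instead of during cube processing.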
#2 - Bad Processing Order
This is (apparently) fairly typical. You may have instructed SSAS to process a Fact table (measure group) before the dimensions it references have been processed. For example, you've run an ETL process to update your RDBMS warehouse. The ETL added some new dimension members, and new facts which refer to those new dimension members. If you process the measure group first, it will try to look up the new dimension keys in the "old" dimension.
==> Process the dimension(s), then the cube.
#3 - NULLs in Your Fact or Dimension Data
Apparently also common due to (IMO) bad warehousing design.
If you have a NULL value in one of your Fact table keys, SSAS may report that it cannot locate a default-valued key in the dimension table. To restate: if an integer-typed foreign key in the fact table is NULL, SSAS may report it can't locate "0" in the Dimension table.
If you have a NULL value in one of your Dimension table attributes, SSAS may be unable to find an attribute key in a "snowflake" type warehouse, because rows get excluded when they fail to join across the NULL. The worst part is that the NULL-valued attribute may not be related (in any obvious way) to the attribute SSAS reports it can't find...
==> Check for NULL values in your Fact tables and Dimension tables. If you can rearchitect your data warehouse to eliminate all NULL values - DO IT. (see Kimball Design Tip #43.)
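A quick sanity check, again with hypothetical table and column names:

SELECT COUNT(*) AS NullCustomerKeys
FROM dbo.FactSales
WHERE CustomerKey IS NULL;             -- NULL foreign keys in the fact table

SELECT COUNT(*) AS NullSubcategoryKeys
FROM dbo.DimProduct
WHERE ProductSubcategoryKey IS NULL;   -- NULL snowflake join column in a dimension
-- Anything above zero is a candidate for this error.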
#4 - Data Collation Inconsistencies
The collation of the dimensional database (or column) could be incorrect. Your RDBMS warehouse could be set to case-insensitivity, but the AS database is set to be case sensitive. Or it could be accent sensitivity that doesn't match. Be sure to check individual column collations in the RDBMS too... copying or restoring databases from servers with different collations can cause problems.
==> Change the collation of the RDBMS (server, database, and/or columns) and SSAS to match (case sensitivity and accent sensitivity).
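These standard catalog queries (the warehouse database name here is hypothetical) show the server, database, and column collations you need to compare against the SSAS database's collation:

SELECT SERVERPROPERTY('Collation') AS ServerCollation;

SELECT name, collation_name
FROM sys.databases
WHERE name = 'MyWarehouse';

SELECT OBJECT_NAME(c.object_id) AS TableName, c.name AS ColumnName, c.collation_name
FROM sys.columns AS c
WHERE c.collation_name IS NOT NULL
  AND c.collation_name <> CAST(DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS nvarchar(128));
-- The last query lists columns whose collation differs from the current database default.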
#5 - Materialized Reference Dimensions Processed Incorrectly
If you're using a reference dimension (sometimes called a bridge dimension) in SSAS - like the AdventureWorks demo does with the Reseller dimension in order to link it to the Geography dimension - be aware that it doesn't get resolved in Dimension Processing like you might think it should. Materialized Reference Dimensions are processed in Partition Processing.
For excellent details, see Alberto Ferrari's blog post.
==> Process your Partitions.
Comments
And if it says it can't find the value "0", then it may actually be receiving a NULL in the fact table and converting it to zero before trying to find that value.
TYVM. I've updated the post.
Hi Todd,
Nice post about the SSAS attribute key error. I was about to write about this topic since it's one of the most frequently asked errors on MSDN, but you have put it together very well!
Very nice post. Congrats!!!!
Thanks
Dibyant S Upadhyay
I got the same problem, but nothing helped. The missing key exists in the Dim table, all dimensions are fully processed before the cube, and there are no NULLs in the Dim or Fact tables.
Every time I try to process, I get a different missing key, but always from the same Dim table. This particular dimension is huge, 1M+ rows (Dates...); the only solution I found was to rebuild the cube from zero with all dimensions. The cube actually works if I replace that dimension with one with less data, so it seems SSAS cannot correctly process large dimensions and will occasionally return less data than processed, resulting in the error.
I highly doubt that SSAS has any issue with processing larger dimensions - but that's always a possibility. The above list is (obviously) not an exhaustive account of the possible causes and remedies for this behaviour.
What I would recommend is that you first post your issue on the SSAS forums on MSDN (http://social.msdn.microsoft.com/Forums/en-US/sqlanalysisservices/threads/). If you don't get a good resolution, please post your issue to SQL Connect (https://connect.microsoft.com/SQLServer).
If you do resolve the issue, please contact me with the results so I can update this page!
I had the same problem. After I fixed the missing field in the database table I still got the "missing attribute" error. To fix it, in the Data Source View I edited the two joining tables to be views with "select top (0) ..". Then I processed the individual dimensions separately, then processed the whole cube. Then I removed the "top (0)" and repeated the individual processing followed by the cube processing. Fixed.
Nice post!
Thanks a lot!
It may be memory usage that is causing the errors (not large dimensions as such):
http://social.msdn.microsoft.com/Forums/en-US/sqlanalysisservices/thread/268a4548-bbe7-40e0-b645-94e7e855e0aa
Nice work Todd, I also get the same problem when SSAS data source table refers to a SQL Server view defined as SELECT TOP 10000 * FROM table.
Obviously I was doing this to speed up processing times during development. Eventually solved it by introducing a table to store the top 10000 rows, and pointing SSAS at the table.
Cheers
Simon
Re #3: I just had the same problem. The solution was to set the UnknownMember property on the dimension to 'Visible' and set NULL processing to UnknownMember in the advanced properties of the dimension usage (cube design).
This may be due to an orphaned child in one of the fact tables.
Thanks! It was #2 for me... never would've thought the order mattered.
Thanks Todd for this helpful error description. It was very easy to identify the error with your description :)
While none of these solved my problem, this blog jogged my brainstorming a bit more so thanks! :-) My issue was that the keys for my referenced dimensions were reversed. #5 was the closest to my issue. The keys don't work as I thought. Once I switched the keys for the reference dimension attribute and intermediate dimension attribute and reprocessed the cube, voilà! :-)
I am running into the same problem, but none of the solutions above solves my problem. There are no NULLs in my fact tables. The key it complains about not finding exists in both the dimension and the fact table. I process ALL dimensions before I process the fact table. None of the dimension attributes or fact table keys are NULL... and I am still getting this error!!
I have no idea what else to do...
I had the same problem. I don't know why, but when creating the cube, the dimension tables got different names (underscores were transformed to blanks). I had to rename the dimension keys and dimensions to the accurate naming and then process each dimension.
I had the same issue, but #2 fixed it for me. Processed the dimensions first, then the cube. Thanks!
This can also happen if your SSAS server is set to case sensitive and your database is not.
I'll say thank you too.
My dimensions and fact are based on simple Views.
However, when I was receiving "Attribute cannot be found" errors, I was testing the "SELECT DISTINCT...." SQL without any issues.
I even tried
SELECT * FROM view WHERE Attribute = 12345 without any issue.
However....
SELECT count(*) FROM view WHERE column_name = 12345
failed and count(*) = 0
This was due to trying to speed up dimension loads while developing my cube by using SELECT TOP 1000 * FROM view.
Thanks Todd!
Kind Regards, Jemahl
Excellent examples Jemahl - anything can trip you up - even taking (reasonable) shortcuts while developing. (I do that all the time.) Glad this reference (for me) is still helping real devs out there!
Hi,
I'm having an "attribute key cannot be found" error only when an insert is occurring in the SQL source database (the one that is feeding the OLAP cube).
The problem is that my database is live 24/7 (with active users day and night), so there will always be an insert that will make my SQL update job fail.
Note: in a test environment where there are no insertions in my test DB while the cube is being updated, the cube is properly updated, without any "attribute key cannot be found".
PS: I'm an ASP.NET developer, not a DBA, nor a BI specialist, so this problem may be casual for database experts, but for me, it's a tricky issue.
Best Regards
Sam
Which table is the insert being made into? The fact table? Do all of the keys on that inserted row have values in the dimension tables? Have the dimension tables been processed before your attempt to process the cube?
Hi Todd,
Thanks for your answer.
The insert is done into two fact tables.
Keys on those tables are required ('not null' in their OLTP counterparts).
Despite these integrity constraints, the log file shows the ID attribute was converted to an unknown member because it couldn't be found.
The other missing attribute is a foreign key to a table that is also 'not null' on the OLTP side.
I've just tried to process the dimension tables before processing the cube.
Sam
Let's be more clear with fake table names and columns, and where certain objects are - in the relational database, or in the SSAS database. Please point out exactly where my assumptions below are incorrect...
In your relational database you have a fact table called FactA and a dimension table called DimB. In FactA, you have a PKA column which is the primary key of the fact table. In DimB, you have PKB which is the primary key of the dimension table. By being PKs, they are required to be not nullable. FactA also has a FKB column - the foreign key that connects facts to dimension rows. Sometimes the relational database does not have foreign keys enforced, sometimes it does. In either case, SSAS can still have a problem.
First, we'll talk about the more obvious problem when foreign keys are not enforced in the relational database. We'll add a row to DimB with PKB of 1000. Now add a row to FactA with a PKA of 1 and FKB of 1000. All is good. Now add a row to FactA with PKA of 2 and FKB of 2000. The relational database will not complain, it will allow this. But if you instruct SSAS to process the database or cube, it will report an error, or convert to unknown members. It should be pretty obvious why it's doing that.
Second, a more subtle problem - let's assume foreign keys are enforced in the relational database (or even if they aren't - you happen to be convinced that all rows in FactA have a FKB that refers to a row in DimB with the same value in PKB). You can still have a "missing attribute" problem. Let's start over and truncate FactA and DimB. Insert a row into DimB with PKB of 1000, then a row into FactA with PKA of 1 and FKB of 1000. Process the SSAS database, and all is well. Now add a row into DimB with PKB of 2000, then add a row into FactA with PKA of 2 and FKB of 2000. Now only process the SSAS cube. This will fail, or convert to unknown. Why? Because you forgot to process the dimension first. SSAS has its own set of tables separate from the relational database. When you process objects in SSAS, you're essentially doing an ETL from that relational database into SSAS' cube database. If you process the cube (facts) first, you are inserting a row into a SSAS fact table for which no row in the SSAS dimension table exists - you've violated SSAS' foreign key constraints. This problem will immediately go away if you process the dimension, then the fact. Or if you process the entire SSAS database (which causes SSAS to process all dimensions first, then cube measures).
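If it helps to see it in T-SQL, here's a rough sketch of that second scenario using the fake names above (assuming only the key columns exist, for brevity):

TRUNCATE TABLE dbo.FactA;
DELETE FROM dbo.DimB;

INSERT INTO dbo.DimB (PKB) VALUES (1000);
INSERT INTO dbo.FactA (PKA, FKB) VALUES (1, 1000);
-- Process the full SSAS database here: everything works.

INSERT INTO dbo.DimB (PKB) VALUES (2000);
INSERT INTO dbo.FactA (PKA, FKB) VALUES (2, 2000);
-- Process ONLY the cube (measure group) here: "attribute key cannot be found",
-- because SSAS' own copy of DimB still only contains 1000.
-- Process DimB first, then the cube, and it succeeds.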
Does that help?
Hi Todd,
Thanks for these examples. As you guessed, the PK and FK are all 'NOT NULL'. So that left me with your second option.
I modified my SQL update job. It was a single-step job; I made it a 5-step one, one step for each dimension to process and the last step for the SSAS cube processing itself.
Unfortunately, just as before, it works fine as long as there are no insertions in the OLTP database, and fails when I start my import module that inserts data while the SQL job is running.
So what I'm tempted to do is replace my two fact tables with a named query, as it is the equivalent of an OLTP view. Does it make sense to have a named query that would simply be the result of an inner join on the tables that are causing the 'attribute key cannot be found' error?
If yes, do I simply have to delete the two dimensions and create a named query instead?
Sorry for asking such basic questions, but these are my first steps in the SSAS world.
Best Regards
Sam
Ah, OK, now I understand what's going on - you're putting an SSAS cube directly on top of an in-use OLTP database. That's not going to work well, nor is changing your DSV to use INNER JOINed named queries... because what will you be joining to? You'll be joining the "new" fact rows to equally "new" dimension rows that are not part of the cube either.
Pick up the Kimball book The Data Warehouse Lifecycle Toolkit. You've now discovered one of the reasons that "ETL" tools exist. Your other options are to engineer a read-only copy of the database - a snapshot, a readable secondary server, something like that where you pause inserts and updates until your cube is processed.
Hi Todd,
Thanks again for your answer and the book reference.
So when you talk about ETL, as I'm in a Microsoft environment, does it mean I have to go for SSIS? Before I do, will SSIS also require me to pause the import module that inserts new rows in my database?
Best Regards
Sam
You don't HAVE to use SSIS. But you should be considering the practice of separating the "operational" data (the stuff your OLTP system uses), and the "analytic" data (the stuff your cubes process). In between, you have ETL to move/update information from the OLTP DB to the DW. Part of this ETL is to make sure the data you copied is internally consistent. If you copy the "invoice" table, then the "invoice line items" table, you want to make sure you don't have invoice line items that don't have an invoice because it got added by the OLTP while you were copying. This separation of DBs (an "operational" one and a "DW" one) helps in more ways than that. It also provides the opportunity to reformat the data into something the cube can more easily digest - but that's a deeper topic.
Bottom line, you don't need SSIS. You can use T-SQL to copy tables (SELECT INTO/INSERT INTO). Or take a Database Snapshot (only in Enterprise). Or backup and restore to another DB.
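As a rough sketch of the plain T-SQL option (hypothetical OLTP and DW database/table names), copy the dimension sources first and only copy facts whose dimension members made it across - that keeps the DW copy internally consistent while the OLTP keeps inserting:

INSERT INTO DW.dbo.DimCustomer (CustomerKey, CustomerName)
SELECT c.CustomerKey, c.CustomerName
FROM OLTP.dbo.Customer AS c
WHERE NOT EXISTS (SELECT 1 FROM DW.dbo.DimCustomer AS d WHERE d.CustomerKey = c.CustomerKey);

INSERT INTO DW.dbo.FactSales (SalesKey, CustomerKey, Amount)
SELECT s.SalesKey, s.CustomerKey, s.Amount
FROM OLTP.dbo.Sales AS s
WHERE NOT EXISTS (SELECT 1 FROM DW.dbo.FactSales AS f WHERE f.SalesKey = s.SalesKey)
  AND EXISTS (SELECT 1 FROM DW.dbo.DimCustomer AS d WHERE d.CustomerKey = s.CustomerKey);
-- Copy the dimension rows first, then only the facts that can join to them.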
I have to say this article just saved my life (figuratively, of course).
I had about 1000 rows getting skipped, and had a hunch it was because of the use of special Nordic accents in the names. I implemented solution number 4 and it worked! Thank you so much :-)
I had this error, and it turned out it was an issue with the contributing view not being deterministic. Good database design would solve this much further up the chain, but in this case the data is calculated on the fly. The dimension has a key/value name pair, but the underlying data sometimes has multiple names for the same key (depending on source of the record). The view was choosing one at random and since SSAS issues separate queries for each dimension attribute, it was retrieving different data on the query for each attribute - e.g. first query would say Key=1 Name=A Company, second would say Key=1 Name=A Co.
I hope no one ever has this problem (i.e., I hope you have good database design), but for a quick and dirty cube, it can come up.
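For what it's worth, one quick-and-dirty way to make a view like that deterministic (a sketch with hypothetical names) is to force a single name per key, so every attribute query SSAS issues sees the same row:

SELECT CompanyKey, CompanyName
FROM (
    SELECT CompanyKey, CompanyName,
           ROW_NUMBER() OVER (PARTITION BY CompanyKey ORDER BY CompanyName) AS rn
    FROM dbo.CompanySource
) AS ranked
WHERE rn = 1;   -- always the same name for a given key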