Search Results

Search found 60932 results on 2438 pages for 'data operations'.


  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns, is irrelevant: this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed with every parameter set to NULL:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets (plus the final timing summary): DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set shows some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us the database and table names. Joining to sys.indexes adds the index name:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function call, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
            BEGIN PRINT N'Invalid database'; END
        ELSE IF @object_id IS NULL
            BEGIN PRINT N'Invalid object'; END
        ELSE
            BEGIN
                SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
            END;
        GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area) it will be noticed when the results are not consistent with what was expected, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the DMV are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we'll skip these, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    avg_fragmentation_in_percent - the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    fragment_count - the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    avg_fragment_size_in_pages - the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    page_count - the total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked: column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024 * (Col2/100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the FILL_FACTOR of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed so that new records are always added to the end of the index rather than needing to be inserted in the middle. * - it's approximate as there are many factors, associated with things like the type of data and other database settings, that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk. Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
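
    As a footnote to the 'intelligent' defragmentation point above, here is a minimal sketch of such a decision query. The 5%/30% thresholds are the commonly quoted Books Online guidelines rather than values from this post, and the page_count filter is an arbitrary choice; adjust both for your own environment.

        SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , [ddips].[avg_fragmentation_in_percent]
             , CASE
                   WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'REBUILD'
                   WHEN [ddips].[avg_fragmentation_in_percent] > 5  THEN 'REORGANIZE'
                   ELSE 'NONE'
               END AS [SuggestedAction]
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]
        WHERE [ddips].[index_id] > 0      -- skip heaps
          AND [ddips].[page_count] > 100  -- ignore tiny indexes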

    Read the article

  • WCF - The maximum nametable character count quota (16384) has been exceeded while reading XML data.

    - by Jankhana
    I have a WCF service that uses wsHttpBinding. The server configuration is as follows:

        <bindings>
          <wsHttpBinding>
            <binding name="wsHttpBinding" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647">
              <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                            maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                            maxNameTableCharCount="2147483647" />
              <security mode="None">
                <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                <message clientCredentialType="Windows" negotiateServiceCredential="true"
                         algorithmSuite="Default" establishSecurityContext="true" />
              </security>
            </binding>
          </wsHttpBinding>
        </bindings>

    At the client side I'm including the service reference of the WCF service. It works great if I have a limited number of functions, say 90 OperationContracts in my IService, but if I add one more OperationContract then I'm unable to update the service reference, nor am I able to add that service reference. In this article it's mentioned that by changing those config files (i.e. devenv.exe.config, WcfTestClient.exe.config and SvcUtil.exe.config) it will work, but even after including those bindings in those config files that error still pops up, saying:

    There was an error downloading 'http://10.0.3.112/MyService/Service1.svc/mex'. The request failed with HTTP status 400: Bad Request. Metadata contains a reference that cannot be resolved: 'http://10.0.3.112/MyService/Service1.svc/mex'. There is an error in XML document (1, 89549). The maximum nametable character count quota (16384) has been exceeded while reading XML data. The nametable is a data structure used to store strings encountered during XML processing - long XML documents with non-repeating element names, attribute names and attribute values may trigger this quota. This quota may be increased by changing the MaxNameTableCharCount property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 89549. If the service is defined in the current solution, try building the solution and adding the service reference again.

    Any idea how to solve this?
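
    For reference, the change those articles suggest for devenv.exe.config (and likewise WcfTestClient.exe.config and SvcUtil.exe.config) is usually along these lines: give the metadata exchange endpoint a custom binding with relaxed reader quotas. This is a sketch of the commonly posted snippet, with an arbitrary binding name, rather than a verified fix for this exact service:

        <system.serviceModel>
          <client>
            <!-- Used by Add Service Reference / svcutil when downloading
                 metadata over plain HTTP; the binding name is arbitrary. -->
            <endpoint name="http" binding="customBinding" bindingConfiguration="largeNameTable"
                      contract="IMetadataExchange" />
          </client>
          <bindings>
            <customBinding>
              <binding name="largeNameTable">
                <textMessageEncoding>
                  <readerQuotas maxNameTableCharCount="2147483647" />
                </textMessageEncoding>
                <httpTransport maxReceivedMessageSize="2147483647" />
              </binding>
            </customBinding>
          </bindings>
        </system.serviceModel>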

    Read the article

  • Why doesn't Data Driven Subscription in SSRS 2005 like my Stored Procedure?

    - by bert
    I'm trying to define a data-driven subscription for a report in SSRS 2005. In step 3 of the set-up you're asked for "a command or query that returns a list of recipients and optionally returns fields used to vary delivery settings and report parameter values for each recipient". This I have written and it returns the data without a hitch. I press Next and it rolls on to the next screen in the set-up, which has all the variables to set for the DDS, and in each case there is an option to "Select Value From Database". I select this radio button and press the drop-down. No fields are available to me. Now, the only way I could vary the number of parameters returned by the SP was to have the SP write the SQL to an nvarchar variable and then execute the variable as SQL at the end. I have tested this in Management Studio and it returns the expected fields. I even named them after the fields in SSRS, but the thing won't put the field names into the drop-downs. I've even taken the query body out of the stored proc, verified it in SSRS and then tried that. It doesn't work either. Can anyone shed any light on what I'm doing wrong?
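
    One likely culprit, offered as a guess rather than a confirmed fix: SSRS discovers the field list from the query's column metadata, and a procedure that builds its SQL in an nvarchar and EXECs it exposes no columns at design time. A common workaround is to materialise the dynamic results into a table variable and finish with a static SELECT, so the shape is fixed; a sketch with hypothetical column names:

        CREATE PROCEDURE dbo.GetSubscriptionRecipients
        AS
        BEGIN
            -- Hypothetical fixed shape for the recipient list
            DECLARE @recipients TABLE (
                EmailAddress NVARCHAR(256),
                ReportParam1 NVARCHAR(100)
            );

            DECLARE @sql NVARCHAR(4000);
            SET @sql = N'SELECT EmailAddress, ReportParam1 FROM dbo.SomeSource';

            INSERT INTO @recipients (EmailAddress, ReportParam1)
            EXEC (@sql);

            -- Static final SELECT: SSRS can now see the column names
            SELECT EmailAddress, ReportParam1 FROM @recipients;
        END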

    Read the article

  • A catalogue of Cassandra log messages: What is the correct interpretation?

    - by knorv
    The following is a complete catalogue of all log messages generated by Cassandra 0.6 when stress-testing a Cassandra installation over an extended period of time:

        AntiEntropyService: Sending AEService tree for (,) to: []
        CassandraDaemon: Binding thrift service to localhost/N.N.N.N:N
        CassandraDaemon: Cassandra starting up...
        ColumnFamilyStore: has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='.../cassandra/commitlog/CommitLog-N.log', position=N)
        ColumnFamilyStore: Enqueuing flush of Memtable()@N
        CommitLog: Discarding obsolete commit log:CommitLogSegment(.../cassandra/commitlog/CommitLog-N.log)
        CommitLog: Log replay complete
        CommitLog: Replaying .../cassandra/commitlog/CommitLog-N.log, ...
        CommitLogSegment: Creating new commitlog segment .../cassandra/commitlog/CommitLog-N.log
        CompactionManager: Compacted to .../cassandra/data//-N-Data.db. N/N bytes for N keys. Time: Nms.
        CompactionManager: Compacting [org.apache.cassandra.io.SSTableReader(path='.../cassandra/data//-N-Data.db'), ...]
        DatabaseDescriptor: Auto DiskAccessMode determined to be mmap
        GCInspector: GC for ConcurrentMarkSweep: N ms, N reclaimed leaving N used; max is N
        GCInspector: GC for ParNew: N ms, N reclaimed leaving N used; max is N
        Memtable: Completed flushing .../cassandra/data//-N-Data.db
        Memtable: Writing Memtable()@N
        SSTable: Deleted .../cassandra/data//-N-Data.db
        SSTableDeletingReference: Deleted .../cassandra/data//-N-Data.db
        SSTableReader: Sampling index for .../cassandra/data//-N-Data.db
        StorageService: Starting up server gossip
        SystemTable: Saved ClusterName found: Test Cluster
        SystemTable: Saved ClusterName not found. Using Test Cluster
        SystemTable: Saved Token found: N
        SystemTable: Saved Token not found. Using N

    For each of the log messages listed - what is the correct interpretation of the log message?

    Read the article

  • iPhone: Memory leak when using NSOperationQueue...

    - by MacTouch
    Hi, I've been sitting here for at least half an hour trying to find a memory leak in my code. I just replaced a synchronous call to a (touch) method with an asynchronous one using NSOperationQueue. The Leak Inspector reports a memory leak after I made the change to the code. What's wrong with the version using NSOperationQueue?

    Version without a memory leak:

        -(NSData *)dataForKey:(NSString *)ressourceId_ {
            NSString *path = [self cachePathForKey:ressourceId_];          // returns an autoreleased NSString*
            NSString *cacheKey = [self cacheKeyForRessource:ressourceId_]; // returns an autoreleased NSString*
            NSData *data = [[self memoryCache] valueForKey:cacheKey];
            if (!data) {
                data = [self loadData:path]; // returns an autoreleased NSData*
                if (data) {
                    [[self memoryCache] setObject:data forKey:cacheKey];
                }
            }
            [self touch:path];
            return data;
        }

    Version with a memory leak (I do not see any):

        -(NSData *)dataForKey:(NSString *)ressourceId_ {
            NSString *path = [self cachePathForKey:ressourceId_];          // returns an autoreleased NSString*
            NSString *cacheKey = [self cacheKeyForRessource:ressourceId_]; // returns an autoreleased NSString*
            NSData *data = [[self memoryCache] valueForKey:cacheKey];
            if (!data) {
                data = [self loadData:path]; // returns an autoreleased NSData*
                if (data) {
                    [[self memoryCache] setObject:data forKey:cacheKey];
                }
            }
            NSInvocationOperation *touchOp = [[NSInvocationOperation alloc] initWithTarget:self
                                                                                  selector:@selector(touch:)
                                                                                    object:path];
            [[self operationQueue] addOperation:touchOp];
            [touchOp release];
            return data;
        }

    And of course, the touch method does nothing special either; it just changes the date of the file:

        -(void)touch:(id)path_ {
            NSString *path = (NSString *)path_;
            NSFileManager *fm = [NSFileManager defaultManager];
            if ([fm fileExistsAtPath:path]) {
                NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSDate date], NSFileModificationDate, nil];
                [fm setAttributes:attributes ofItemAtPath:path error:nil];
            }
        }

    Read the article

  • File Format Conversion to TIFF. Some issues???

    - by Nains
    I have a proprietary image format, SNG, which has a continuous array of image data along with image meta information in a separate HDR file. Now I need to convert this SNG format to a standard TIFF 6.0 format, so I studied the TIFF format, i.e. its header, image file directories (IFDs) and stripped image data. Now I have a few concerns about this conversion; please assist me.

    1. SNG continuous data vs TIFF stripped data: should I convert the SNG data to TIFF as continuous data in one strip (a data load/edit-time problem?) or make logical StripOffsets of the SNG image data?

    2. The SNG data header uses only the necessary meta information, so while converting SNG to TIFF some information can't be retrieved, such as NewSubfileType, the Software tag etc. This raises the concern of whether, after conversion, any missing directory information such as NewSubfileType or the Software tag is a necessary and sufficient condition for a TIFF file.

    3. Encoding of each pixel component of an RGB sample in SNG data: here each SNG image data strip is encoded per pixel as:

        Out^[i] := round( LineBuffer^[i * 3]     * 0.072169
                        + LineBuffer^[i * 3 + 1] * 0.715160
                        + LineBuffer^[i * 3 + 2] * 0.212671);

    The only thing I deduce from this is that each pixel is represented with 3 RGB components and some coefficient is multiplied with each component to make the SNG viewer work with the RGB colour information of the SNG image data. (The developer who worked on this earlier has left; now I am following the trace :)) Thus while converting this to TIFF, the same decoding needs to be done. This raises the concern of how the RGB information in TIFF is produced - or better, do we need this information? Please assist...
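
    For what it's worth, those three constants are the ITU-R BT.709 luma coefficients (roughly 0.2127 for red, 0.7152 for green, 0.0722 for blue), applied here to what looks like BGR-ordered samples; in other words, the line collapses each RGB pixel to a single grey/luma value rather than preserving colour. A minimal C sketch of that reading (the BGR layout is my assumption, not something the SNG spec confirms):

        #include <math.h>
        #include <stdint.h>

        /* Convert one scanline of BGR byte triplets to 8-bit luma using the
           BT.709 coefficients seen in the original Pascal code. */
        void bgr_line_to_luma(const uint8_t *line, uint8_t *out, int pixels)
        {
            for (int i = 0; i < pixels; i++) {
                double b = line[i * 3];
                double g = line[i * 3 + 1];
                double r = line[i * 3 + 2];
                out[i] = (uint8_t)round(0.072169 * b + 0.715160 * g + 0.212671 * r);
            }
        }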

    Read the article

  • jQuery autocomplete: taking JSON input and setting multiple fields from single field

    - by Sly
    I am trying to get the jQuery autocomplete plugin to take a local JSON variable as input. Once the user has selected one option from the autocomplete list, I want the adjacent address fields to be auto-populated. Here's the JSON variable, declared as a global variable in the head of the HTML file:

        var JSON_address = {"1":{"origin":{"nametag":"Home","street":"Easy St","city":"Emerald City","state":"CA","zip":"9xxxx"},"destination":{"nametag":"Work","street":"Factory St","city":"San Francisco","state":"CA","zip":"94104"}},"2":{"origin":{"nametag":"Work","street":"Umpa Loompa St","city":"San Francisco","state":"CA","zip":"94104"},"destination":{"nametag":"Home","street":"Easy St","city":"Emerald City ","state":"CA","zip":"9xxxx"}}};

    I want the first field to display a list of "origin" nametags: "Home", "Work". Then when "Home" is selected, adjacent fields are automatically populated with Street: Easy St, City: Emerald City, etc. Here's the code I have for the autocomplete:

        $(document).ready(function() {
            $("#origin_nametag_id").autocomplete(JSON_address, {
                autoFill: true,
                minChars: 0,
                dataType: 'json',
                parse: function(data) {
                    var rows = new Array();
                    for (var i = 0; i < data.length; i++) {
                        rows[rows.length] = {
                            data: data[i],
                            value: data[i].origin.nametag,
                            result: data[i].origin.nametag
                        };
                    }
                    return rows;
                }
            }).change(function() {
                $("#street_address_id").autocomplete({
                    dataType: 'json',
                    parse: function(data) {
                        var rows = new Array();
                        for (var i = 0; i < data.length; i++) {
                            rows[rows.length] = {
                                data: data[i],
                                value: data[i].origin.street,
                                result: data[i].origin.street
                            };
                        }
                        return rows;
                    }
                });
            });
        });

    So this question really has two subparts: 1) How do you get autocomplete to process the multi-dimensional JSON object? (When the field is clicked and text entered, nothing happens - no list.) 2) How do you get the other fields (street, city, etc.) to populate based upon the origin nametag sub-array? Thanks in advance!
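
    A sketch of one way to do both parts with this plugin: flatten the keyed object into an array first (the plugin's local mode expects an array, and data.length is undefined on a plain object, which is why no list appears), then use the plugin's result callback to fill the sibling fields. The street/city/state/zip field ids below are hypothetical, and the formatItem/result hooks assume the classic bassistance autocomplete plugin:

        $(document).ready(function() {
            // Flatten {"1": {...}, "2": {...}} into an array of origin records
            var origins = [];
            for (var key in JSON_address) {
                if (JSON_address.hasOwnProperty(key)) {
                    origins.push(JSON_address[key].origin);
                }
            }

            $("#origin_nametag_id").autocomplete(origins, {
                autoFill: true,
                minChars: 0,
                formatItem: function(row) { return row.nametag; },
                formatMatch: function(row) { return row.nametag; },
                formatResult: function(row) { return row.nametag; }
            }).result(function(event, row) {
                // Populate the adjacent address fields from the chosen record
                $("#street_address_id").val(row.street);
                $("#city_id").val(row.city);
                $("#state_id").val(row.state);
                $("#zip_id").val(row.zip);
            });
        });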

    Read the article

  • Is this a good approach to address double-base64-encoding?

    - by Freiheit
    My software understands attachments, like PNGs attached to user records. These attachments are usually sent in from outside sources as a Base64-encoded string. The database stores whatever data it is given, Base64-encoded or not. When I serve up the attachment for download I do this:

        if (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    There is a potential for data that is double-encoded. For instance, the sender of a message had Base64-encoded data, then encoded it again when building the message to send to me. I think the following code would address that circumstance:

        while (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    So if the data is encoded multiple times, it would be decoded until it's in its 'raw' state and then served up for download. Is this approach an acceptable way to address that problem? Ideally some sort of checking could happen at the edge when I receive attachment data, but that will take more time. This looping seems to be a faster way to do it. The 'Base64' library is Apache Commons: http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Base64.html I trust it to properly identify Base64 encoded data.
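
    One caution worth sketching: isBase64 only checks that the bytes fall in the Base64 alphabet, so a raw payload that happens to look like Base64 would get decoded one time too many, and in principle an input could keep passing the test for several rounds. A bounded variant is a cheap safeguard; this assumes a commons-codec version where Base64.isBase64(byte[]) exists, and the cap of 3 is an arbitrary choice:

        import org.apache.commons.codec.binary.Base64;

        public final class AttachmentDecoder {

            private static final int MAX_DECODE_PASSES = 3; // arbitrary safety cap

            /** Decode repeatedly, but never more than MAX_DECODE_PASSES times. */
            public static byte[] unwrap(byte[] data) {
                int passes = 0;
                while (passes < MAX_DECODE_PASSES && Base64.isBase64(data)) {
                    data = Base64.decodeBase64(data);
                    passes++;
                }
                return data;
            }
        }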

    Read the article

  • How can a data ellipse be superimposed on a ggplot2 scatterplot?

    - by Radu
    Hi, I have an R function which produces 95% confidence ellipses for scatterplots. The output looks like this, having a default of 50 points for each ellipse (50 rows):

                     [,1]        [,2]
        [1,] 0.097733810 0.044957994
        [2,] 0.084433494 0.050337990
        [3,] 0.069746783 0.054891438

    I would like to superimpose a number of such ellipses, one for each level of a factor called 'site', on a ggplot2 scatterplot produced from this command:

        > plat1 <- ggplot(mapping=aes(shape=site, size=geom), shape=factor(site)); plat1 + geom_point(aes(x=PC1.1, y=PC2.1))

    This is run on a dataset called dflat, which looks like this:

            site      geom         PC1.1        PC2.1      PC3.1        PC1.2      PC2.2
        1 Buhlen 1259.5649 -0.0387975838 -0.022889782 0.01355317  0.008705276 0.02441577
        2 Buhlen  653.6607 -0.0009398704 -0.013076251 0.02898955 -0.001345149 0.03133990

    The result is fine, but when I try to add the ellipse (let's say for this one site, called "Buhlen"):

        > plat1 + geom_point(aes(x=PC1.1, y=PC2.1)) + geom_path(data=subset(dflat, site=="Buhlen"), mapping=aes(x=ELLI(PC1.1, PC2.1)[,1], y=ELLI(PC1.1, PC2.1)[,2]))

    I get an error message:

        Error in data.frame(x = c(0.0977338099339815, 0.0844334944904515, 0.0697467834016782, : arguments imply differing number of rows: 50, 211

    I've managed to fix this in the past, but I cannot remember how. It seems that geom_path is relying on the same points rather than plotting new ones. Any help would be appreciated.
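
    The usual fix is to evaluate the ellipse function once, put its 50 points into their own data frame, and give that data frame to geom_path, so the layer stops inheriting the 211-row dflat. A sketch, assuming ELLI() returns the two-column matrix shown above:

        # Build a 50-row data frame of ellipse coordinates for one site
        b <- subset(dflat, site == "Buhlen")
        ell <- as.data.frame(ELLI(b$PC1.1, b$PC2.1))
        names(ell) <- c("x", "y")

        # geom_path gets its own data, so the row counts no longer clash
        plat1 + geom_point(aes(x = PC1.1, y = PC2.1)) +
            geom_path(data = ell, mapping = aes(x = x, y = y))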

    Read the article

  • Can't change pivot table's Access data source - bug in Excel 2000 SP3?

    - by Ron West
    I have a set of Excel 2000 SP3 worksheets that have pivot tables that get data from an Access 2000 SP3 database created by a contractor who has left our company. Unfortunately, he did all his work on his private area on the company (Novell) network, and now that he has left us the drive spec has been deleted and is invalid. We were able to get the database files restored to our network area by our IT Service Desk people, but we now have to re-link everything to point to our group area instead of the now-nonexistent private area. If I follow the advice given elsewhere on this site (open the wizard, click 'Back' to get to 'Step 2 of 3', click 'Get Data...') I get a message that the old filespec is an invalid path and that I need to check that the path name is valid and that I am connected to the server on which the file resides. I then click OK and get a Login dialog with a 'Database...' button on the right. I click this and get a 'Select Database' dialog which allows me to choose the appropriate database in its correct new location. I then click OK, which takes me back to the Login dialog. I can confirm that it has accepted my new location by clicking on 'Database...' as before - the NEW location is still shown. So far so good, but if I then click OK I get two unhelpful messages. First I get one saying that Excel 'Could not use '|'; file already in use.' - although no other files are in use. Clicking OK takes me back to the Login dialog. Clicking OK again gives me the same message as before, telling me that the OLD filespec is invalid (as if I hadn't changed anything) - but clicking the 'Database...' button shows that the correct (NEW) database location is still selected. Can anyone tell me a way of using VBA to change the link information without having to spend hours fighting the PivotTable Wizard - preferably something similar to the way you update an Access TableDef:

        db.TableDefs(strLinkName).Connect = strNewLink
        db.TableDefs(strLinkName).RefreshLink

    Thanks!
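
    A sketch of the VBA route, rewriting each PivotCache's connection string and query text in place. The old and new paths are placeholders, and this assumes the caches hold ODBC connection strings containing the old drive spec, which is worth confirming with the Debug.Print line before letting the Replace run:

        Sub RepointPivotCaches()
            Const OLD_PATH As String = "H:\OldPrivateArea\Contractor.mdb"  ' placeholder
            Const NEW_PATH As String = "G:\GroupArea\Contractor.mdb"       ' placeholder
            Dim pc As PivotCache
            For Each pc In ActiveWorkbook.PivotCaches
                Debug.Print pc.Connection            ' inspect before changing
                pc.Connection = Replace(pc.Connection, OLD_PATH, NEW_PATH)
                pc.CommandText = Replace(pc.CommandText, OLD_PATH, NEW_PATH)
                pc.Refresh
            Next pc
        End Sub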

    Read the article

  • What Qt container class to use for a sorted list?

    - by Dave
    Part of my application involves rendering audio waveforms. The user will be able to zoom in/out of the waveform. Starting fully zoomed out, I only want to sample the audio at the intervals necessary to draw the waveform at the given resolution. Then, when they zoom in, asynchronously resample the "missing points" and provide a clearer waveform. (Think Google Maps.) I'm not sure of the best data structure to use in the Qt world. Ideally, I would like to store data samples sorted by time, but with the ability to fill in points as needed. So, for example, the data points might initially look like:

        data[0 ms] = 10
        data[10 ms] = 32
        data[20 ms] = 21
        ...

    But when they zoom in, I would get more points as necessary, perhaps:

        data[0 ms] = 10
        data[2 ms] = 11
        data[4 ms] = 18
        data[6 ms] = 30
        data[10 ms] = 32
        data[20 ms] = 21
        ...

    Note that the values in brackets are lookup values (milliseconds), not array indices. In .NET I might have used a SortedList<int, int>. What would be the best class to use in Qt? Or should I use an STL container?
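
    For what it's worth, the closest Qt analogue to .NET's SortedList<int, int> is QMap<int, int>: keys stay sorted, points can be filled in at any time, and lowerBound() finds the nearest sample at or after a given timestamp. A minimal sketch:

        #include <QMap>
        #include <QtDebug>

        int main()
        {
            QMap<int, int> samples;          // key: time in ms, value: amplitude
            samples.insert(0, 10);
            samples.insert(10, 32);
            samples.insert(20, 21);

            samples.insert(4, 18);           // zooming in: fill in a missing point

            // Iteration is always in key (time) order
            for (QMap<int, int>::const_iterator it = samples.constBegin();
                 it != samples.constEnd(); ++it) {
                qDebug() << it.key() << "ms =" << it.value();
            }

            // Nearest sample at or after 3 ms
            QMap<int, int>::const_iterator at = samples.lowerBound(3);
            if (at != samples.constEnd())
                qDebug() << "next sample after 3 ms is at" << at.key();
            return 0;
        }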

    Read the article

  • Kohana 3 ORM: How to get data from pivot table? and all other tables for that matter

    - by zenna
    I am trying to use ORM to access data stored in three MySQL tables: 'users', 'items', and a pivot table for the many-to-many relationship, 'user_item'. I followed the guidance from http://stackoverflow.com/questions/1946357/kohana-3-orm-read-additional-columns-in-pivot-tables and tried:

        $user = ORM::factory('user', 1);
        $user->items->find_all();
        $user_item = ORM::factory('user_item', array('user_id' => $user, 'item_id' => $user->items));
        if ($user_item->loaded())
        {
            foreach ($user_item as $pivot)
            {
                print_r($pivot);
            }
        }

    But I get the SQL error: "Unknown column 'user_item.id' in 'order clause' [ SELECT user_item.* FROM user_item WHERE user_id = '1' AND item_id = '' ORDER BY user_item.id ASC LIMIT 1 ]". This is clearly erroneous because Kohana is trying to order the elements by a column which doesn't exist: user_item.id. This id doesn't exist because the primary keys of this pivot table are the foreign keys of the two other tables, 'users' and 'items'. Trying to use:

        $user_item = ORM::factory('user_item', array('user_id' => $user, 'item_id' => $user->items))
            ->order_by('item_id', 'ASC');

    makes no difference, as it seems the order_by() or any SQL clauses are ignored if the second argument of the factory is given. Another obvious error with that query is that item_id = '', when it should contain all the elements. So my question is: how can I get access to the data stored in the pivot table, and, actually, how can I get access to all items held by a particular user, as I have even had problems with that? Thanks
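
    A sketch of the conventional Kohana 3 setup for this, assuming the table and model names above: declare the relation as going 'through' the pivot table, and read the pivot rows with the query builder, which makes no assumption about a user_item.id column:

        // In classes/model/user.php: has_many "through" the pivot table
        class Model_User extends ORM {
            protected $_has_many = array(
                'items' => array('model' => 'item', 'through' => 'user_item'),
            );
        }

        // All items held by user 1
        $user  = ORM::factory('user', 1);
        $items = $user->items->find_all();   // foreach ($items as $item) { ... }

        // The pivot rows themselves, via the query builder
        $pivot_rows = DB::select()->from('user_item')
            ->where('user_id', '=', $user->id)
            ->execute()
            ->as_array();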

    Read the article

  • How Do I make a simple .htaccess internal redirect Catch All script while forwarding POST data?

    - by RB
    I just want to catch all requests and forward them internally to my catch-all page with all POST data intact. Catch-all page: http://www.mydomain.com/addons/redirect/catch-all.php I've tried so many combinations, but my server doesn't want to redirect internally if I specify more than catch-all.php:

        # Internally redirect all pages to "Catch" page
        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule (.*) /addons/redirect/catch-all.php [L]

    Also, do I need [L] or is it useless for internal redirects? Then, what PHP code would I use to grab the POST data, use it, and finally PHP-redirect to the originally requested page? Would it be done just as normal by using $_POST['variable_name']; or something different? Then, how would I go about calling the originally requested page, so I can tell PHP to header-redirect them to that page? Thanks! UPDATE: Ha, sick, never mind. The condition DOES work. Here's my code:

        # Internally redirect all pages to "Catch" page
        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/robots.txt$
        RewriteCond %{REQUEST_URI} !\.(gif|jpe?g|png|css|js|pdf|doc|xml)$
        RewriteCond %{REQUEST_URI} !^/addons/redirect/catch-all\.php$
        RewriteRule (.*)$ /addons/redirect/catch-all.php?q=$1 [L]

    Thanks guys for the inspiration! Now time to get that PHP to work...
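
    A sketch of the PHP side, with the main caveat built in: POST data survives the internal rewrite because the request never left the server, but a header('Location: ...') afterwards starts a brand-new GET request, so the POST body will not follow the user to the originally requested page; do everything you need with it inside catch-all.php first. The variable_name field is the question's placeholder:

        <?php
        // catch-all.php - the rewrite delivers the original URI in ?q=
        $requested = isset($_GET['q']) ? $_GET['q'] : '';

        // POST data is intact here because the rewrite was internal
        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            $name = isset($_POST['variable_name']) ? $_POST['variable_name'] : null;
            // ... log it, store it, act on it ...
        }

        // Redirecting afterwards issues a fresh GET; the POST body does not carry over
        header('Location: /' . ltrim($requested, '/'));
        exit;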

    Read the article

  • How can I stop an auto-generated Linq to SQL class from loading ALL data?

    - by Gary McGill
    DUPLICATE of http://stackoverflow.com/questions/2433422/how-can-i-stop-an-auto-generated-linq-to-sql-class-from-loading-all-data post answers there! I have an ASP.NET MVC project, much like the NerdDinner tutorial example. (I'm using MVC 2, but followed the NerdDinner tutorial in order to create it). As per the instructions in part 3 of the tutorial, I've created a Linq-to-SQL model of my database by creating a "Linq to SQL Classes" (.dbml) surface, and dropping my database tables onto it. The designer has automatically added relationships between the generated classes based on my database tables. Let's say that my classes are as per the NerdDinner example, so I have Dinner and RSVP tables, where each Dinner record is associated with many RSVP records - hence in the generated classes, the Dinner object has a RSVPs property which is a list of RSVP objects. My problem is this: it appears (and I'd be gladly proved wrong on this) that as soon as I access a Dinner object, it's loading all of the corresponding RSVP objects, even if I don't use the RSVPs member. First question: is this really the default behavior for the generated classes? In my particular situation, the object graph contains many more tables (which have an order of magnitude more records), and so this is disastrous behaviour - I'd be loading tons of data when all I want to do is show the details of a single parent record. Second question: are there any properties exposed through the designer UI that would let me modify this behavior? (I can't find any). Third question: I've seen a description of how to control the loading of related records in a DataContext by using a DataShape object associated with the DataContext. Is that what I'm meant to do, and if so are there any tutorials like the NerdDinner one that would show not only how to do it, but also suggest a 'pattern' for normal use?
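
    For what it's worth, the shipped LINQ to SQL API for shaping related loads is DataLoadOptions (DataShape was its pre-release name), and relationship properties are deferred-loaded by default, so a SQL Profiler trace is worth a look before concluding that all RSVPs really are fetched up front. A sketch of the two relevant knobs, using NerdDinner-style names; the DataContext class name is the tutorial's, not verified against your project:

        using System.Data.Linq;
        using System.Linq;

        var ctx = new NerdDinnerDataContext();

        // Option 1: deferred loading is on by default; turning it off stops any
        // automatic fetching of Dinner.RSVPs when the property is touched.
        ctx.DeferredLoadingEnabled = false;

        // Option 2: keep deferred loading, but state explicitly what loads eagerly.
        var options = new DataLoadOptions();
        options.LoadWith<Dinner>(d => d.RSVPs);  // eager-load only this relation
        ctx.LoadOptions = options;               // must be assigned before querying

        var dinner = ctx.Dinners.Single(d => d.DinnerID == 1);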

    Read the article

  • Django: How can I delete a formset entry if one of its fields is blank?

    - by mkret
    Hi, I have the following scenario: I have a form with data that does not need translation and a formset with a text field that should be translated into an undefined number of languages. Both parts are bound to a model. Each translated text is kept in a model with a foreign key that binds it to the untranslatable data. Something like:

        class Person(models.Model):
            name = models.CharField(max_length=60)
            birth_date = models.DateField()

        class PersonBio(models.Model):
            person = models.ForeignKey(Person)
            locale = models.CharField(max_length=10)
            bio = models.TextField()

    Each form in the formset has 2 fields: a text field (with the translated text) and a locale field (with the language into which the text was translated). I had it working with no problems until I tried to change its normal behaviour: I wanted to eliminate the need for the DELETE field by deleting an instance of the translated text if the text field was left blank. I've googled quite a lot and read the whole documentation for forms, formsets and model validation, but had no luck. To be honest, I couldn't even think of a solution. Where should I implement this? In a form's clean() method? In the view? Somewhere in the formset? The formset's save() method, maybe? I'll keep trying to find a way to do that, but any help/tip/clue is appreciated. Thanks in advance.
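
    One natural home for this is a custom formset whose save() drops instances whose text came back blank. A sketch against the models above, assuming a model formset over PersonBio and assuming the bio field is declared with required=False so that blank submissions still validate:

        from django.forms.models import BaseModelFormSet, modelformset_factory

        class PersonBioBaseFormSet(BaseModelFormSet):
            def save(self, commit=True):
                instances = super(PersonBioBaseFormSet, self).save(commit=False)
                saved = []
                for instance in instances:
                    if not instance.bio.strip():
                        # Blank translation: delete it instead of saving it
                        if instance.pk:
                            instance.delete()
                    else:
                        if commit:
                            instance.save()
                        saved.append(instance)
                return saved

        PersonBioFormSet = modelformset_factory(PersonBio, formset=PersonBioBaseFormSet,
                                                fields=('locale', 'bio'))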

    Read the article

  • Silverlight - how to render image from non-visible data bound user control?

    - by MagicMax
    Hello! I have the following situation: I'd like to build a timeline control. So I have a UserControl with an ItemsControl on it (every row represents some person). That ItemsControl contains another ItemsControl as its ItemsControl.ItemTemplate - it shows e.g. events for the person, arranged by event date. So it looks like a kind of grid, with dates as column headers and e.g. people as row headers:

        ........................|.2010.01.01.....2010.01.02.....2010.01.03
        Adam Smith....|......[some event#1].....[some event#2]......
        John Dow.......|...[some event#3].....[some event#4].........

    I can have a lot of people (ItemsControl #1 - 100-200 items) and a lot of events occurring on a given day (1-10-30 events per person in one day). The problem is that when the user scrolls ItemsControl #1/#2 it happens toooo sloooooowwww, because of the large number of elements that have to be rendered at one time (I have e.g. several text boxes and other elements in the description of a particular event). Question #1: how can I improve it? Maybe somebody knows a better way to build such a user control? I should mention that I'm using a custom virtual panel, based on a custom virtual panel implementation found somewhere on the internet... Question #2: I'd like to make an image with the help of WriteableBitmap, render the data-bound control to the image, and show the image instead of a lot of elements. The problem is that I'm trying to render an invisible data-bound control (created in code-behind) and it has ActualWidth/Height equal to zero (so nothing is rendered). How can I solve that? Thank you very much for your help!
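
    For question #2, a control that has never been through a layout pass keeps ActualWidth/ActualHeight at zero; forcing Measure and Arrange before rendering usually fixes that. A Silverlight sketch - the control name, row size and the target Image element are assumptions:

        // Force a layout pass on a control that is not in the visual tree
        var ctl = new PersonRowControl();            // hypothetical data-bound control
        ctl.DataContext = person;

        var size = new Size(800, 24);                // assumed row size
        ctl.Measure(size);
        ctl.Arrange(new Rect(new Point(0, 0), size));
        ctl.UpdateLayout();

        // ActualWidth/ActualHeight are now non-zero, so the render works
        var bmp = new WriteableBitmap(ctl, null);
        image.Source = bmp;                          // 'image' is an Image element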

    Read the article

  • Valgrind says "stack allocation," I say "heap allocation"

    - by Joel J. Adamson
    Dear Friends, I am trying to trace a segfault with valgrind. I get the following message from valgrind: ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C5: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) ==3683== ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C7: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) However, here's the offending line: /* allocate mating table */ age_dep_data->mtable = malloc (age_dep_data->geno * sizeof (double *)); if (age_dep_data->mtable == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); for (int j = 0; j < age_dep_data->geno; j++) { 131=> age_dep_data->mtable[j] = calloc (age_dep_data->geno, sizeof (double)); if (age_dep_data->mtable[j] == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); } What gives? I thought any call to malloc or calloc allocated heap space; there is no other variable allocated here, right? Is it possible there's another allocation going on (the offending stack allocation) that I'm not seeing? You asked to see the code, here goes: /* Copyright 2010 Joel J. Adamson <[email protected]> $Id: age_dep.c 1010 2010-04-21 19:19:16Z joel $ age_dep.c:main file Joel J. Adamson -- http://www.unc.edu/~adamsonj Servedio Lab University of North Carolina at Chapel Hill CB #3280, Coker Hall Chapel Hill, NC 27599-3280 This file is part of an investigation of age-dependent sexual selection. This code is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with haploid. If not, see <http://www.gnu.org/licenses/>. 
*/ #include "age_dep.h" /* global variables */ extern struct argp age_dep_argp; /* global error message variables */ char * nullmsg = "Null pointer: %i"; /* error message for conversions: */ char * errmsg = "Representation error: %s"; /* precision for formatted output: */ const char prec[] = "%-#9.8f "; const size_t age_max = AGEMAX; /* maximum age of males */ static int keep_going_p = 1; int main (int argc, char ** argv) { /* often used counters: */ int i, j; /* read the command line */ struct age_dep_args age_dep_args = { NULL, NULL, NULL }; argp_parse (&age_dep_argp, argc, argv, 0, 0, &age_dep_args); /* set the parameters here: */ /* initialize an age_dep_params structure, set the members */ age_dep_params_t * params = malloc (sizeof (age_dep_params_t)); if (params == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); age_dep_init_params (params, &age_dep_args); /* initialize frequencies: this initializes a list of pointers to initial frqeuencies, terminated by a NULL pointer*/ params->freqs = age_dep_init (&age_dep_args); params->by = 0.0; /* what range of parameters do we want, and with what stepsize? */ /* we should go from 0 to half-of-theta with a step size of about 0.01 */ double from = 0.0; double to = params->theta / 2.0; double stepsz = 0.01; /* did you think I would spell the whole word? */ unsigned int numparts = floor(to / stepsz); do { #pragma omp parallel for private(i) firstprivate(params) \ shared(stepsz, numparts) for (i = 0; i < numparts; i++) { params->by = i * stepsz; int tries = 0; while (keep_going_p) { /* each time through, modify mfreqs and mating table, then go again */ keep_going_p = age_dep_iterate (params, ++tries); if (keep_going_p == ERANGE) error (ERANGE, ERANGE, "Failure to converge\n"); } fprintf (stdout, "%i iterations\n", tries); } /* for i < numparts */ params->freqs = params->freqs->next; } while (params->freqs->next != NULL); return 0; } inline double age_dep_pmate (double age_dep_t, unsigned int genot, double bp, double ba) { /* the probability of mating between these phenotypes */ /* the female preference depends on whether the female has the preference allele, the strength of preference (parameter bp) and the male phenotype (age_dep_t); if the female lacks the preference allele, then this will return 0, which is not quite accurate; it should return 1 */ return bits_isset (genot, CLOCI)? 1.0 - exp (-bp * age_dep_t) + ba: 1.0; } inline double age_dep_trait (int age, unsigned int genot, double by) { /* return the male trait, a function of the trait locus, age, the age-dependent scaling parameter (bx) and the males condition genotype */ double C; double T; /* get the male's condition genotype */ C = (double) bits_popcount (bits_extract (0, CLOCI, genot)); /* get his trait genotype */ T = bits_isset (genot, CLOCI + 1)? 1.0: 0.0; /* return the trait value */ return T * by * exp (age * C); } int age_dep_iterate (age_dep_params_t * data, unsigned int tries) { /* main driver routine */ /* number of bytes for female frequencies */ size_t geno = data->age_dep_data->geno; size_t genosize = geno * sizeof (double); /* female frequencies are equal to male frequencies at birth (before selection) */ double ffreqs[geno]; if (ffreqs == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); /* do not set! 
Use memcpy (we need to alter male frequencies (selection) without altering female frequencies) */ memmove (ffreqs, data->freqs->freqs[0], genosize); /* for (int i = 0; i < geno; i++) */ /* ffreqs[i] = data->freqs->freqs[0][i]; */ #ifdef PRMTABLE age_dep_pr_mfreqs (data); #endif /* PRMTABLE */ /* natural selection: */ age_dep_ns (data); /* normalized mating table with new frequencies */ age_dep_norm_mtable (ffreqs, data); #ifdef PRMTABLE age_dep_pr_mtable (data); #endif /* PRMTABLE */ double * newfreqs; /* mutate here */ /* i.e. get the new frequency of 0-year-olds using recombination; */ newfreqs = rec_mating (data->age_dep_data); /* return block */ { if (sim_stop_ck (data->freqs->freqs[0], newfreqs, GENO, TOL) == 0) { /* if we have converged, stop the iterations and handle the data */ age_dep_sim_out (data, stdout); return 0; } else if (tries > MAXTRIES) return ERANGE; else { /* advance generations */ for (int j = age_max - 1; j < 0; j--) memmove (data->freqs->freqs[j], data->freqs->freqs[j-1], genosize); /* advance the first age-class */ memmove (data->freqs->freqs[0], newfreqs, genosize); return 1; } } } void age_dep_ns (age_dep_params_t * data) { /* calculate the new frequency of genotypes given additive fitness and selection coefficient s */ size_t geno = data->age_dep_data->geno; double w[geno]; double wbar, dtheta, ttheta, dcond, tcond; double t, cond; /* fitness parameters */ double mu, nu; mu = data->wparams[0]; nu = data->wparams[1]; /* calculate fitness */ for (int j = 0; j < age_max; j++) { int i; for (i = 0; i < geno; i++) { /* calculate male trait: */ t = age_dep_trait(j, i, data->by); /* calculate condition: */ cond = (double) bits_popcount (bits_extract(0, CLOCI, i)); /* trait-based fitness term */ dtheta = data->theta - t; ttheta = (dtheta * dtheta) / (2.0 * nu * nu); /* condition-based fitness term */ dcond = CLOCI - cond; tcond = (dcond * dcond) / (2.0 * mu * mu); /* calculate male fitness */ w[i] = 1 + exp(-tcond) - exp(-ttheta); } /* calculate mean fitness */ /* as long as we calculate wbar before altering any values of freqs[], we're safe */ wbar = gen_mean (data->freqs->freqs[j], w, geno); for (i = 0; i < geno; i++) data->freqs->freqs[j][i] = (data->freqs->freqs[j][i] * w[i]) / wbar; } } void age_dep_norm_mtable (double * ffreqs, age_dep_params_t * params) { /* this function produces a single mating table that forms the input for recombination () */ /* i is female genotype; j is male genotype; k is male age */ int i,j,k; double norm_denom; double trait; size_t geno = params->age_dep_data->geno; for (i = 0; i < geno; i++) { double norm_mtable[geno]; /* initialize the denominator: */ norm_denom = 0.0; /* find the probability of mating and add it to the denominator */ for (j = 0; j < geno; j++) { /* initialize entry: */ norm_mtable[j] = 0.0; for (k = 0; k < age_max; k++) { trait = age_dep_trait (k, j, params->by); norm_mtable[j] += age_dep_pmate (trait, i, params->bp, params->ba) * (params->freqs->freqs)[k][j]; } norm_denom += norm_mtable[j]; } /* now calculate entry (i,j) */ for (j = 0; j < geno; j++) params->age_dep_data->mtable[i][j] = (ffreqs[i] * norm_mtable[j]) / norm_denom; } } My current suspicion is the array newfreqs: I can't memmove, memcpy or assign a stack variable then hope it will persist, can I? rec_mating() returns double *.
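
    Two observations on this, offered as diagnosis rather than certainty. First, ffreqs is a variable-length array on the stack, so the if (ffreqs == NULL) check after it is dead code; a VLA is never NULL (if geno is too large the program simply overflows the stack). Second, valgrind's "stack allocation" message does not contradict the callocs: calloc'ed memory is zeroed heap memory, so the flagged value must be a local variable created inside age_dep_init_params whose bytes were never written before sparse_mat_mat_kron() later branched on data derived from them. A sketch of the pattern valgrind reports (all names hypothetical, not taken from the real age_dep_init_params):

        /* Hypothetical sketch of "uninitialised value ... created by a stack
           allocation" - NOT the actual contents of age_dep_init_params */
        void age_dep_init_params_sketch(age_dep_params_t *params)
        {
            double scratch[2];               /* stack allocation */
            scratch[0] = 1.0;                /* scratch[1] never assigned */
            params->wparams[1] = scratch[1]; /* uninitialised value escapes */
        }

    The memmove() from rec_mating()'s returned pointer is fine, by the way: copying out of a returned heap buffer persists. Just check what rec_mating() leaves uninitialised, and free() the returned pointer once you are done with it.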

    Read the article


  • What is usefulness of W3C's "Semantic Data Extractor" in semantically correct XHTML CSS Development?

    - by metal-gear-solid
    What is the usefulness of W3C's Semantic Data Extractor? http://www.w3.org/2003/12/semantic-extractor.html "This tool, geared by an XSLT stylesheet, tries to extract some information from a HTML semantic rich document. It only uses information available through a good usage of the semantics defined in HTML. The aim is to show that providing a semantically rich HTML gives much more value to your code: using a semantically rich HTML code allows a better use of CSS, makes your HTML intelligible to a wider range of user agents (especially search engines bots). As an aside, it can give clues to user agents developers on some hooks that could be interesting to add in their product." After checking validation for CSS and HTML, should I go for the Semantic Data Extractor tool? What does it do, and how can it improve our coding? Is anyone using it? When I check some sites randomly with it, with most sites it gives an error:

        Using org.apache.xerces.parsers.SAXParser
        Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "</input>".
        org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "</input>".

    Is it possible to get a pass for every site with this tool? On one site I got this error: "No top-level heading (h1) found, no outline extracted." Is it necessary to have at least an h1 in any webpage?
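
    That SAXParseException simply means the extractor parses pages as XML, so every void element must be self-closed, XHTML-style; a small illustrative fragment:

        <!-- Fails the extractor's XML parser: "input" is never closed -->
        <input type="text" name="q">

        <!-- Parses cleanly as XHTML: the element is self-closed -->
        <input type="text" name="q" />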

    Read the article

  • Valueurl Binding On Large Arrays Causes Sluggish User Interface

    - by Hooligancat
    I have a large data set (some 3500 objects) that returns from a remote server via HTTP. Currently the data is being presented in an NSCollectionView. One aspect of the data is a path back to the server for a small image that represents the data (think thumbnail, for simplicity). Bindings work fantastically for the data that has already been returned, and binding the image via a valueurl binding is easy to do. However, the user interface is very sluggish when scrolling through the data set - which makes me think that the NSCollectionView is retrieving all the image data instead of just the image data used to display the currently visible images. I was under the impression that Cocoa controls were smart enough to only retrieve data for the information that is actually being output to the user interface through lazy loading. This certainly seems to be the case with NSTableView - but I could be misguided on this thought. Should valueurl binding act lazily and, moreover, should it act lazily in an NSCollectionView? I could create a caching mechanism (in fact I already have such a thing in place for another application - see my post here if you are interested: http://stackoverflow.com/questions/1740209/populating-nsimage-with-data-from-an-asynchronous-nsurlconnection) but I really don't want to go this route for this specific implementation if I don't have to, as the user could potentially change data sets often and may only want small sub-sets of the data. Any suggested approaches? Thanks!

    Read the article

  • PHP: How To Integrate HTML Purifier To Filter User Submitted Data?

    - by TaG
    I have this script that collects data from users and I wanted to check their data for malicious code like XSS and SQL injections by using HTML Purifier (http://htmlpurifier.org/), but how do I add it to my PHP form submission script? Here is my HTML Purifier code:

        require_once '../../htmlpurifier/library/HTMLPurifier.auto.php';
        $config = HTMLPurifier_Config::createDefault();
        $config->set('Core.Encoding', 'UTF-8'); // replace with your encoding
        $config->set('HTML.Doctype', 'XHTML 1.0 Strict'); // replace with your doctype
        $purifier = new HTMLPurifier($config);
        $clean_html = $purifier->purify($dirty_html);

    Here is my PHP form submission code:

        if (isset($_POST['submitted'])) { // Handle the form.
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT users.*, profile.* FROM users INNER JOIN contact_info ON contact_info.user_id = users.user_id WHERE users.user_id=3");
            $about_me = mysqli_real_escape_string($mysqli, $_POST['about_me']);
            $interests = mysqli_real_escape_string($mysqli, $_POST['interests']);
            if (mysqli_num_rows($dbc) == 0) {
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $dbc = mysqli_query($mysqli, "INSERT INTO profile (user_id, about_me, interests) VALUES ('$user_id', '$about_me', '$interests')");
            }
            if ($dbc == TRUE) {
                $dbc = mysqli_query($mysqli, "UPDATE profile SET about_me = '$about_me', interests = '$interests' WHERE user_id = '$user_id'");
                echo '<p class="changes-saved">Your changes have been saved!</p>';
            }
            if (!$dbc) {
                // There was an error...do something about it here...
                print mysqli_error($mysqli);
                return;
            }
        }
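
    A sketch of the integration point: run each raw $_POST value through the purifier first, then escape the purified string for SQL exactly as the script already does; purification deals with markup (XSS) while escaping deals with the query. Paths and field names are taken from the question:

        require_once '../../htmlpurifier/library/HTMLPurifier.auto.php';

        $config = HTMLPurifier_Config::createDefault();
        $config->set('Core.Encoding', 'UTF-8');
        $config->set('HTML.Doctype', 'XHTML 1.0 Strict');
        $purifier = new HTMLPurifier($config);

        if (isset($_POST['submitted'])) {
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");

            // 1. Purify the raw input (strips scripts and malformed markup)
            $about_me  = $purifier->purify($_POST['about_me']);
            $interests = $purifier->purify($_POST['interests']);

            // 2. Escape the purified text for use in the SQL statement
            $about_me  = mysqli_real_escape_string($mysqli, $about_me);
            $interests = mysqli_real_escape_string($mysqli, $interests);

            // ... INSERT/UPDATE as before ...
        }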

    Read the article

  • PHP Encrypt/Decrypt with TripleDes, PKCS7, and ECB

    - by Brandon Green
    I've got my encryption function working properly, however I cannot figure out how to get the decrypt function to give proper output. Here is my encrypt function:

        function Encrypt($data, $secret) {
            //Generate a key from a hash
            $key = md5(utf8_encode($secret), true);

            //Take first 8 bytes of $key and append them to the end of $key.
            $key .= substr($key, 0, 8);

            //Pad for PKCS7
            $blockSize = mcrypt_get_block_size('tripledes', 'ecb');
            $len = strlen($data);
            $pad = $blockSize - ($len % $blockSize);
            $data .= str_repeat(chr($pad), $pad);

            //Encrypt data
            $encData = mcrypt_encrypt('tripledes', $key, $data, 'ecb');
            return base64_encode($encData);
        }

    Here is my decrypt function:

        function Decrypt($data, $secret) {
            $text = base64_decode($data);
            $data = mcrypt_decrypt('tripledes', $secret, $text, 'ecb');
            $block = mcrypt_get_block_size('tripledes', 'ecb');
            $pad = ord($data[($len = strlen($data)) - 1]);
            return substr($data, 0, strlen($data) - $pad);
        }

    Right now I am using a key of test and I'm trying to encrypt 1234567. I get the base64 output I'm looking for from encryption, but when I go to decrypt I get a blank response. I'm not very well versed in encryption/decryption, so any help is much appreciated!!
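
    The likely mismatch, sketched as a fix below: Decrypt hands the raw $secret to mcrypt_decrypt, while Encrypt derived a 24-byte key from it (the MD5 digest plus its first 8 bytes again). Deriving the key the same way on both sides should line the two functions up:

        function Decrypt($data, $secret) {
            // Derive the key exactly as Encrypt does (16-byte MD5 + first 8 bytes)
            $key = md5(utf8_encode($secret), true);
            $key .= substr($key, 0, 8);

            $text = base64_decode($data);
            $data = mcrypt_decrypt('tripledes', $key, $text, 'ecb');

            // Strip the PKCS7 padding that Encrypt added
            $pad = ord($data[strlen($data) - 1]);
            return substr($data, 0, strlen($data) - $pad);
        }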

    Read the article

  • ASP.NET configure data source is not returning anything?

    - by Greg McNulty
    I'm selecting table data of the current user:

        SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel] FROM [UserData] WHERE ([UserProfileID] = @UserProfileID)

    I have set a control to the unique user ID and it is getting the correct value. HTML:

        <asp:Label ID="userID" runat="server" Text="Labeluser"></asp:Label>

    C#:

        userID.Text = Membership.GetUser().ProviderUserKey.ToString();

    I then use it in the WHERE clause using the Configure Data Source window: unique ID = control, then ControlID userID (it fills in .Text for me). I compile and run, but nothing shows up where the table should be. Any suggestions? Here is the code it has created:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1">
            <Columns>
                <asp:BoundField DataField="ConfidenceLevel" HeaderText="ConfidenceLevel" SortExpression="ConfidenceLevel" />
                <asp:BoundField DataField="LoveLevel" HeaderText="LoveLevel" SortExpression="LoveLevel" />
                <asp:BoundField DataField="HappinessLevel" HeaderText="HappinessLevel" SortExpression="HappinessLevel" />
            </Columns>
        </asp:GridView>
        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:ConnectionStringToDB %>"
            SelectCommand="SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel] FROM [UserData] WHERE ([UserProfileID] = @UserProfileID)">
            <SelectParameters>
                <asp:ControlParameter ControlID="userID" Name="UserProfileID" PropertyName="Text" Type="Object" />
            </SelectParameters>
        </asp:SqlDataSource>
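
    One thing worth ruling out, sketched below with the page's own names: a ControlParameter reads the Label at data-bind time, so the Label must be populated before the GridView binds (early in Page_Load works), and Type="Object" on what is presumably a GUID key can bind badly. Supplying the value yourself in the data source's Selecting event (wired up with OnSelecting="SqlDataSource1_Selecting" on the SqlDataSource) sidesteps both; the uniqueidentifier assumption is mine, not confirmed by the question.

        protected void Page_Load(object sender, EventArgs e)
        {
            // Populate the label before the GridView data-binds
            userID.Text = Membership.GetUser().ProviderUserKey.ToString();
        }

        protected void SqlDataSource1_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            // Hand over a strongly typed value instead of relying on Type="Object"
            e.Command.Parameters["@UserProfileID"].Value =
                (Guid)Membership.GetUser().ProviderUserKey;
        }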

    Read the article

  • Caching and accessing configuration data in ASP.NET MVC app.

    - by Sosh
    I'm about to take a look at how to implement internationalisation for an ASP.NET MVC project. I'm looking at how to allow the user to change languages. My initial thought is a drop-down list containing each of the supported languages. However, a few questions have come to mind:

    1. How to store the list of supported languages? (e.g. just "en", "English"; "fr", "French", etc.) An XML file? .config files?
    2. If I store this in a file I'll have to cache it (at startup, I guess). So, what would be best: load the XML data into a list (somehow) and store this list in the System.Web.Cache? Application state?
    3. How then to load this data into the view (for display in a drop-down)? Give the view direct access to the cache?

    Just want to make sure I'm going in the right direction here... Thank you.
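
    A sketch of one conventional arrangement, with an assumed file name and XML shape: read the languages from an XML file once, keep them in HttpRuntime.Cache with a file dependency, and hand the view a plain list via the model rather than direct cache access:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Caching;
        using System.Xml.Linq;

        public static class LanguageProvider
        {
            private const string CacheKey = "SupportedLanguages";

            public static IList<KeyValuePair<string, string>> GetLanguages()
            {
                var cached = HttpRuntime.Cache[CacheKey] as IList<KeyValuePair<string, string>>;
                if (cached != null)
                    return cached;

                // Assumed file: ~/App_Data/languages.xml with <language code="en" name="English" /> entries
                string path = HttpContext.Current.Server.MapPath("~/App_Data/languages.xml");
                var langs = XDocument.Load(path)
                    .Descendants("language")
                    .Select(x => new KeyValuePair<string, string>(
                        (string)x.Attribute("code"),
                        (string)x.Attribute("name")))
                    .ToList();

                // The CacheDependency re-reads the file automatically when it changes
                HttpRuntime.Cache.Insert(CacheKey, langs, new CacheDependency(path));
                return langs;
            }
        }

        // In a controller action, pass a plain list to the view rather than
        // exposing the cache directly:
        // ViewData["Languages"] = new SelectList(LanguageProvider.GetLanguages(), "Key", "Value");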

    Read the article

  • Oracle - Getting Select Count(*) from ... as an output parameter in System.Data.OracleClient

    - by cbeuker
    Greetings all, I have a question. I am trying to build a parameterized query to get the number of rows from a table in Oracle. Rather simple. However, I am an Oracle newbie. I know that in SQL Server you can do something like:

        Select @outputVariable = count(*) from sometable where name = @SomeOtherVariable

    and then you can set up an output parameter in the System.Data.SqlClient to get the @outputVariable. Thinking that one should be able to do this in Oracle as well, I have the following query:

        Select count(*) into :theCount from sometable where name = :SomeValue

    I set up my Oracle parameters (using System.Data.OracleClient - yes, I know it will be deprecated in .NET 4, but that's what I am working with for now) as follows:

        IDbCommand command = new OracleCommand();
        command.CommandText = "Select count(*) into :theCount from sometable where name = :SomeValue";
        command.CommandType = CommandType.Text;

        OracleParameter parameterTheCount = new OracleParameter(":theCount", OracleType.Number);
        parameterTheCount.Direction = ParameterDirection.Output;
        command.Parameters.Add(parameterTheCount);

        OracleParameter parameterSomeValue = new OracleParameter(":SomeValue", OracleType.VarChar, 40);
        parameterSomeValue.Direction = ParameterDirection.Input;
        parameterSomeValue.Value = "TheValueToLookFor";
        command.Parameters.Add(parameterSomeValue);

        command.Connection = myconnectionObject;
        command.ExecuteNonQuery();

        int theCount = (int)parameterTheCount.Value;

    At which point I was hoping the count would be in the parameter parameterTheCount, which I could readily access. I keep getting the error ORA-01036, which http://ora-01036.ora-code.com tells me means I should check the binding in the SQL statement. Am I messing something up in the SQL statement? Am I missing something simple elsewhere? I could just use command.ExecuteScalar() as I am only getting one item, and will probably end up using that, but at this point curiosity has got the better of me. What if I had two parameters I wanted back from my query (ie: select max(ColA), min(ColB) into :max, :min.....)? Thanks..
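
    For reference, SELECT ... INTO is PL/SQL syntax rather than plain SQL, so from a client it needs wrapping in an anonymous block; :theCount then becomes a legal bind target and the ORA-01036 goes away. A sketch of the same call with that one change:

        IDbCommand command = new OracleCommand();
        // Anonymous PL/SQL block: SELECT ... INTO is only valid inside PL/SQL
        command.CommandText =
            "begin " +
            "  select count(*) into :theCount from sometable where name = :SomeValue; " +
            "end;";
        command.CommandType = CommandType.Text;

        OracleParameter parameterTheCount = new OracleParameter(":theCount", OracleType.Number);
        parameterTheCount.Direction = ParameterDirection.Output;
        command.Parameters.Add(parameterTheCount);

        OracleParameter parameterSomeValue = new OracleParameter(":SomeValue", OracleType.VarChar, 40);
        parameterSomeValue.Value = "TheValueToLookFor";
        command.Parameters.Add(parameterSomeValue);

        command.Connection = myconnectionObject;
        command.ExecuteNonQuery();

        // The output comes back as a decimal, so convert rather than cast
        int theCount = Convert.ToInt32(parameterTheCount.Value);

        // Two values back? Add a second INTO target and output parameter:
        // "begin select max(ColA), min(ColB) into :maxA, :minB from sometable; end;"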

    Read the article
