Search Results

Search found 58486 results on 2340 pages for 'data integrator'.

Page 687/2340

  • Primary key datatype in SQL Server database

    - by ooo
    I see that after installing the ASP.NET membership tables, they use the data type "uniqueidentifier" for all of the primary key fields. I have been using the "int" data type and incrementing by one on inserts. Are there any particular benefits to using the uniqueidentifier data type compared to my current model of int primary keys with auto increment on new inserts?

    Read the article

  • Private API and SMS content URIs on Android

    - by Shadow
    Without accessing private APIs to get the content URIs, etc. for SMS, how are we expected to query this data? I am currently in the process of writing my own SMS app and I want to stay as compatible as possible. Without storing the information myself in my own database (so that I could keep the text messages and other programs could still access the data when/if users delete my app) and without using private APIs, how are we supposed to query SMS data?

    Read the article

  • Perl TCP Server handling multiple Client connections

    - by Matt
    I'll preface this by saying I have minimal experience with both Perl and socket programming, so I appreciate any help I can get. I have a TCP server which needs to handle multiple client connections simultaneously, be able to receive data from any one of the clients at any time, and also be able to send data back to the clients based on the information it has received. For example, Client1 and Client2 connect to my server. Client2 sends "Ready"; the server interprets that and sends "Go" to Client1. The following is what I have written so far:

        my $sock = new IO::Socket::INET (
            LocalHost => $host,   # defined earlier in code
            LocalPort => $port,   # defined earlier in code
            Proto     => 'tcp',
            Listen    => SOMAXCONN,
            Reuse     => 1,
        );
        die "Could not create socket $!\n" unless $sock;

        while ( my ($new_sock, $c_addr) = $sock->accept() ) {
            my ($client_port, $c_ip) = sockaddr_in($c_addr);
            my $client_ipnum = inet_ntoa($c_ip);
            my $client_host = "";
            my @threads;
            print "got a connection from $client_host", "[$client_ipnum]\n";
            my $command;
            my $data;
            while ($data = <$new_sock>) {
                push @threads, async \&Execute, $data;
            }
        }

        sub Execute {
            my ($command) = @_;
            # if $command is "test", send "go" to socket1
            print "Executing command: $command\n";
            system($command);
        }

    I know both of my while loops will be blocking and I need a way to implement my accept call as a thread, but I'm not sure of the proper way to write it.
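    For illustration only: the accept-loop-plus-one-thread-per-client pattern described above, sketched in Python rather than Perl purely to show the structure. The way clients are identified here (by connection order) and the "Ready"/"Go" routing are assumptions based on the example in the question.

        import socket
        import threading

        clients = {}                 # client id -> connected socket (assumed naming scheme)
        lock = threading.Lock()

        def handle_client(client_id, conn):
            # Read newline-terminated messages from one client and react to them.
            with conn, conn.makefile("r") as lines:
                for line in lines:
                    if line.strip() == "Ready":          # e.g. Client2 reports readiness
                        with lock:
                            peer = clients.get("client1")
                        if peer is not None:
                            peer.sendall(b"Go\n")        # tell Client1 to proceed

        def serve(host="0.0.0.0", port=9000):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            n = 0
            while True:                                  # the accept loop never blocks on a client
                conn, _addr = srv.accept()
                n += 1
                with lock:
                    clients["client%d" % n] = conn
                threading.Thread(target=handle_client,
                                 args=("client%d" % n, conn),
                                 daemon=True).start()

        if __name__ == "__main__":
            serve()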

    Read the article

  • Simple C question

    - by Meko
    Hi all. I am trying to make a little program that reads data from a file which has the name of a user and some data for that user. I am new to C; how can I calculate this data for each user? By reading line by line and adding each char to an array? And how can I read a line; is there a function for that?

    Read the article

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data, mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables are geospatial shapes, such as zipcode boundaries.

    Now, the first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Is it possible to split the table data into two files, so that all rows where DataType != 2 go into File_A and the DataType = 2 rows go into File_B? That way, when I back up the DB, I can skip adding File_B so my download is waaaaay smaller. Is this possible?

    I'm guessing you might be thinking: why not keep them as two extra tables? Mainly because in the code the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one.

    *** Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.

    Read the article

  • How to access web.config connection string in C#?

    - by salvationishere
    I have a 32-bit XP machine running VS 2008 and I am trying to decrypt my connection string from my web.config file in my C# ASPX file. Even though no errors are returned, my current connection string doesn't display the contents of my selected AdventureWorks stored procedure. I entered:

        C:\Program Files\Microsoft Visual Studio 9.0\VC>Aspnet_regiis.exe -pe "connectionStrings" -app "/AddFileToSQL2"

    Then it said "Succeeded". My web.config section looks like:

        <connectionStrings>
          <add name="Master" connectionString="server=MSSQLSERVER;database=Master; Integrated Security=SSPI" providerName="System.Data.SqlClient" />
          <add name="AdventureWorksConnectionString" connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Integrated Security=True" providerName="System.Data.SqlClient" />
          <add name="AdventureWorksConnectionString2" connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Persist Security Info=true; " providerName="System.Data.SqlClient" />
        </connectionStrings>

    And my C# code-behind looks like:

        string connString = ConfigurationManager.ConnectionStrings["AdventureWorksConnectionString2"].ConnectionString;

    Is there something wrong with the connection string in the web.config or the C# code-behind file?

    Read the article

  • Total Number of records required in paged .NET datagrid control

    - by sumitchauhan
    I am using a data grid and have bound a data source to it. I am trying to get the total number of records in the grid in the overridden InitializePager method, from PagedDataSource.DataSourceCount. I thought DataSourceCount returned the number of records from the SelectCountMethod of the ObjectDataSource, but DataSourceCount is giving me the page size and not the total number of records, whereas when I debug, SelectCountMethod is returning the correct total number of records. I am not sure how to get the data from SelectCountMethod in the DataGrid.

    Read the article

  • Reporting system for organization. Architecture advice required

    - by Andrew Florko
    We have several legacy and third-party systems in our organization that use several RDBMS vendors (and more specialized data storages). Cross-system data reporting (as well as extra reports that are not implemented in the third-party systems) is required, with charts and population of templates (WinWord, Excel). The reporting system is envisioned as an intranet web site with custom user access to reports. We expect ~50 reports per day. Would you suggest using BizTalk or any other integration software, given that the commercial department doesn't plan to buy anything expensive? Would you suggest creating a centralized data store for reporting that is populated regularly, or relying on on-demand services that always return up-to-date data at request time? Thank you in advance!

    Read the article

  • optimistic and pessimistic locks

    - by billmce
    Working on my first PHP/CodeIgniter project and I’ve scoured the ’net for information on locking access to editing data, and haven’t found very much. I expect it to be a fairly regular occurrence for 2 users to attempt to edit the same form simultaneously. My experience (in the stateful world of BBx, filePro, and other RAD apps) is that the data being edited is locked using a pessimistic lock—one user has access to the edit form at a time. The second user basically has to wait for the first to finish. I understand this can be done using Ajax sending XMLHttpRequests to maintain a ‘lock’ database. The PHP world, lacking state, seems to prefer optimistic locking. If I understand it correctly it works like this: both users get to access the data and they each record a ‘before changes’ version of the data. Before saving their changes, the data is once again retrieved and compared to the ‘before changes’ version. If the two versions are identical then the user's changes are written. If they are different, the user is shown what has changed since he/she started editing and some mechanism is added to resolve the differences—or the user is shown a ‘Sorry, try again’ message. I’m interested in any experience people here have had with implementing both pessimistic and optimistic locking. If there are any libraries, tools, or ‘how-to’s available I’d appreciate a link. Thanks
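    For reference, the optimistic scheme described above is often implemented with a version (or timestamp) column rather than a full ‘before changes’ copy: the UPDATE only succeeds if the version is still the one that was read. A minimal sketch in Python with SQLite, assuming a hypothetical records table with id, name and version columns; the PHP/CodeIgniter version would follow the same shape:

        import sqlite3

        def save_with_optimistic_lock(conn, record_id, new_name, expected_version):
            # The UPDATE matches only if nobody bumped the version since we read it.
            cur = conn.execute(
                "UPDATE records SET name = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_name, record_id, expected_version),
            )
            conn.commit()
            if cur.rowcount == 0:
                # Someone else saved first: reload, show the differences, let the user retry.
                raise RuntimeError("Record was changed by another user; please retry")

        # Usage: read the row (including its version), let the user edit, then try to save.
        conn = sqlite3.connect("app.db")
        row = conn.execute("SELECT id, name, version FROM records WHERE id = 1").fetchone()
        save_with_optimistic_lock(conn, row[0], "new value", row[2])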

    Read the article

  • JavaScript asynchronous function

    - by Ben
    Hi there, I have a problem understanding how this could/should be solved. I have two functions. In the first function (I call it loadData()) I'm doing an asynchronous request to the server to load some data. In the second function (saveData()) I'm also doing an async request to the server to write some data. In the callback of this request I'm calling loadData() to refresh the data. Now the problem: in the saveData() function I want to wait for loadData() to be finished before I show a dialog (like alert('Data saved')). I guess this is a common problem, but I couldn't find the solution for it (if there is one..). A solution would be to make the requests synchronous, but the framework I'm using doesn't offer that, and I hope there's a better solution.. Thanks to all!

    Read the article

  • Can you make a python script behave differently when imported than when run directly?

    - by futuraprime
    I often have to write data parsing scripts, and I'd like to be able to run them in two different ways: as a module and as a standalone script. So, for example:

        def parseData(filename):
            # data parsing code here
            return data

        def HypotheticalCommandLineOnlyHappyMagicFunction():
            print json.dumps(parseData(sys.argv[1]), indent=4)

    The idea here being that in another Python script I can call import dataparser and have access to dataParser.parseData in my script, or on the command line I can just run python dataparser.py and it would run my HypotheticalCommandLineOnlyHappyMagicFunction and shunt the data as JSON to stdout. Is there a way to do this in Python?
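    The standard idiom for this is the module-name guard: code under if __name__ == "__main__": runs only when the file is executed directly, not when it is imported. A minimal sketch (Python 3 syntax; the body of parseData here is a placeholder, not the poster's real parser):

        import json
        import sys

        def parseData(filename):
            # Placeholder parsing logic so the sketch runs end to end.
            with open(filename) as f:
                return {"lines": f.read().splitlines()}

        def main():
            # Only reached when the script is run directly, e.g. `python dataparser.py input.txt`.
            print(json.dumps(parseData(sys.argv[1]), indent=4))

        if __name__ == "__main__":
            main()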

    Read the article

  • Sync between local service with a thread and an activity

    - by Henrik
    Hello all, I'm trying to think of a way to sync between a local service and the main activity. The local service has: a thread with a socket connection that could receive data at any time, and a list/array with data. At any time the socket could receive data and add it to the list. The activity needs to display this data. So when the activity starts up it needs to attach to or start the local service and fetch the list. It also needs to be notified when the list is updated. I think I would need to synchronize my list somehow so the local service does not add a new entry to it while the activity fetches the list when connecting to the service. Any ideas? Thanks.

    Read the article

  • Show only div of the product hovering in category grid with jQuery

    - by Dane
    On Magento, I'm trying to get the available attributes per product in a new div (show/hide on mouseover) as soon as I hover a product. Unfortunately, my jQuery code opens every div with the same name. I think I need to do it with jQuery(this), but I've tried it in a 1000 different ways and it won't work. Maybe somebody here can help me with better code.

        jQuery(function() {
            jQuery('.slideDiv').hide().data('over', false);
            jQuery('#hover').hover(function() {
                jQuery('.slideDiv').fadeIn();
            }, function() {
                // Check if mouse did not go over .dialog before hiding it again
                var timeOut = setTimeout(function() {
                    if (!jQuery('.slideDiv').data('over')) {
                        jQuery('.slideDiv').fadeOut();
                        clearTimeout(timeOut);
                    }
                }, 100);
            });
            // Set data for filtering on mouse events for #hover-here
            jQuery('.slideDiv').hover(function() {
                jQuery(this).data('over', true);
            }, function() {
                jQuery(this).fadeOut().data('over', false);
            });
        });

    The PHP just prints the attributes needed.

        <a href="#" id="hover">Custom Attributes</a>
        <div class="slideDiv">
        <?php
        $attrs = $_product->getTypeInstance(true)->getConfigurableAttributesAsArray($_product);
        foreach($attrs as $attr) {
            if(0 == strcmp("shoe_size", $attr['attribute_code'])) {
                $options = $attr['values'];
                print "Größen:<br />";
                foreach($options as $option) {
                    print "{$option['store_label']}<br />";
                }
            }
        }
        ?>
        </div>

    I added the script to http://jsfiddle.net/xsxfr/47/ so you can see there that it is not working like this right now :(.

    Read the article

  • Any way to find out if the current Windows session is in lock mode?

    - by David.Chu.ca
    I have a Windows application written in VS 2005. The application queries a SQL database in a timer cycle every 2 minutes. If there are any data changes, the window is refreshed with the new data. If the user leaves the machine, Windows will automatically lock after a while. There is no sense in querying data every 2 minutes when Windows is locked; therefore I would like to stop the query while the lock is on, so that network data traffic is reduced and the machine's resources such as memory and CPU are saved. Is there any way to find out whether the current Windows session is locked? Is there a Windows API for this purpose if no .NET classes are available? My project is in .NET 2.0 and all users are on Windows XP.

    Read the article

  • How can chunks be allocated in a node.js stream in object mode all at once?

    - by Quentin Engles
    I can see how buffers and strings can be sent as chunks, but I'm having a problem thinking about how streams should be handled when working in object mode. Say I have a byte stream from an HTTP request message. I want to take that message, parse it, and then transform it into one big object. I already know how to parse the message. What I'm wondering is: if the message is big, so it arrives in many chunks, but I want to produce one object as output, how can I make sure the data event waits for the whole thing? Is this just a matter of not using the push method until the chunked data has finished being sent? That would then restrict the stream's output to a smaller object, which I think I'm fine with for now. As an added condition, the larger data will be reduced in size after the transform.
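    For what it's worth, the pattern being asked about is: buffer every incoming chunk and emit a single assembled object only once the source has ended (in Node terms, push from a Transform stream's _flush rather than from _transform). A rough Python analogue with made-up chunk contents, just to show the shape of the idea:

        def assemble(chunks):
            # Collect every chunk first; yield one combined object only when the
            # source is exhausted (the analogue of pushing from _flush).
            buffered = []
            for chunk in chunks:
                buffered.append(chunk)
            yield {"parts": len(buffered), "body": b"".join(buffered)}

        # Example with an already-chunked byte source (contents are made up):
        message = next(assemble([b"GET / HT", b"TP/1.1\r\n", b"\r\n"]))
        print(message)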

    Read the article

  • Japanese characters stored in a SQL Server DB by an ASP page that assumed ISO-8859-1 encoding

    - by Vishal Seth
    We have a legacy ASP-based product that allowed the UI and data languages of user groups to be configured according to their locations. CodePage and CharSet in the ASP pages collecting data were set accordingly. I've noticed a few instances in the SQL Server DB where users posted Japanese characters into an ASP page that assumes the incoming stream is ISO-8859-1/Western, and as a result the data in the SQL table has been garbled. While upgrading the client to our new product, I want to convert those "garbage" Japanese (in some instances Chinese) characters back to their actual form. Can I create some utility ASP page that would go through such data values, "fix" the wrongly-encoded strings, and store everything back as UTF-8 strings? In any case, I don't want to affect the French/Spanish/English characters that might be there as well.
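    In principle this kind of mojibake is reversible as long as the original bytes survived intact: re-encode the stored text with the codec that was wrongly applied (ISO-8859-1 maps all 256 byte values, so that step is lossless), then decode the recovered bytes with the codec the user actually typed in. A minimal sketch in Python; the choice of Shift-JIS here is only an assumption, and each value should be checked rather than converted blindly:

        def fix_mojibake(garbled, assumed_actual="shift_jis"):
            # Undo a wrong ISO-8859-1 decode: recover the original byte stream,
            # then decode it with the encoding the client really used.
            raw = garbled.encode("iso-8859-1")
            return raw.decode(assumed_actual)

        def try_fix(value):
            # Plain ASCII passes through unchanged; genuine Western text with
            # accents may raise or decode to nonsense, so treat failures (and
            # suspicious results) as "leave the value alone".
            try:
                return fix_mojibake(value)
            except (UnicodeEncodeError, UnicodeDecodeError):
                return value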

    Read the article

  • Useless variable name in C struct type definition

    - by user1210233
    I'm implementing a linked list in C. Here's a struct that I made, which represents the linked list:

        typedef struct llist {
            struct lnode* head;  /* Head pointer either points to a node with data or NULL */
            struct lnode* tail;  /* Tail pointer either points to a node with data or NULL */
            unsigned int size;   /* Size of the linked list */
        } list;

    Isn't the "llist" basically useless? When a client uses this library and makes a new linked list, he would have the following declaration:

        list myList;

    So typing llist just before the opening brace is practically useless, right? The following code basically does the same job:

        typedef struct {
            struct lnode* head;  /* Head pointer either points to a node with data or NULL */
            struct lnode* tail;  /* Tail pointer either points to a node with data or NULL */
            unsigned int size;   /* Size of the linked list */
        } list;

    Read the article

  • Use content of fieldnames in a query

    - by rokdd
    Hi, I have three MySQL tables:

        Table 456
        id | binder | property1
        1  | b      | hello
        2  | b      | goodbye
        3  | a      | bonjour

        Table binder
        id | binder | tableid1 | tableid2
        1  | a      | 23       | 456
        2  | b      | 21       | 456
        3  | c      | 45       | 42

        Table 21
        id | property1 | data..
        1  | goodbye   | data about goodbye..
        2  | ciao      | data about ciao..

    So first I want to select the binder row in table binder; I need it to get the name of the table where the data is stored. So I need to select a table by a field's content: in this case the fieldname is tableid1 and its content would be 21, so I have to look in table 21. AND property1 from table 456 and table 21 should be the same... I am using PHP and have already tried with UNION and subqueries, but it seems that I am too silly to prepare such a query!

    Read the article

  • Advice on setting up a central db with master tables for web apps

    - by Dragn1821
    I'm starting to write more and more web applications for work. Many of these web applications need to store the same types of data, such as location. I've been thinking that it may be better to create a central db, store these "master" tables there, and have each application access them. I'm not sure how to go about this. Should I create tables in my application's db to copy the data from the master table and store it in the app's table (for linking with other app tables using foreign keys)? Should I use something like a web service to read the data from the master table instead of firing up a new db connection in my app? Should I forget this idea and just store the data within my app's db? I would like to have data such as location central so I can go to one table and add a new location, and the next time someone needs to select a location from one of the apps, the new one would be there. I'm using ASP.NET MVC 1.0 to build the web apps and SQL 2005 as the db. Need some advice... Thanks!

    Read the article

  • Eclipselink and update trigger on multiple access to the database

    - by Raven
    Hi, in my project I have a database which many clients connect to. Concurrent access and writing work well. The problem now is how to always have the current state of the data without reloading it from the database every second. Does EclipseLink provide a trigger mechanism to (automatically?) reload the data if the database is changed? How would one use such a trigger? Thanks!

    Read the article

  • Forcibly clear memory in java

    - by MBennett
    I am writing an application in Java that I care about being secure. After encrypting a byte array, I want to forcibly remove from memory anything potentially dangerous, such as the key used. In the following snippet key is a byte[], as is data.

        SecretKeySpec secretKeySpec = new SecretKeySpec(key, "AES");
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, secretKeySpec);
        byte[] encData = cipher.doFinal(data, 0, data.length);
        Arrays.fill(key, (byte)0);

    As far as I understand, the last line above overwrites the key with 0s so that it no longer contains any dangerous data, but I can't find a way to overwrite or evict secretKeySpec or cipher similarly. Is there any way to forcibly overwrite the memory held by secretKeySpec and cipher, so that if someone were able to view the current memory state (say, via a cold boot attack), they would not get access to this information?

    Read the article

  • Is there a standard dialog for constructing an ADO.Net connection string (that is redistributable)?

    - by rathkopf
    I want to use a standard dialog to solicit user input of an ADO.net connection string. It is trivial to do for the oledb connection string as described here: MSDN Article on MSDASC.DataLinks().Prompt I've also found examples that use Microsoft.Data.ConnectionUI.dll and MicrosoftData.ConnectionUI.Dialog.dll from VS (HOWTO: Using the Choose Data Source dialog of Visual Studio 2005 from your own code). Unfortunately these DLLs are not licensed for redistribution. Is there a standard dialog for choosing a data source that can be distributed with my application?

    Read the article

  • Unable to diligently close the excel process running in memory

    - by NewAutoUser
    I have developed VB.Net code for retrieving data from an Excel file. I load this data in one form and update it back to Excel after making the necessary modifications. This complete flow works fine, but most of the time I have observed that even if I close the form, the already loaded Excel process does not get closed properly. I tried all possible ways to close it but could not resolve the issue. Below is the code I am using for connecting to Excel; let me know if there is any other approach I may need to follow to resolve this issue. Note: I do not want to kill the Excel process, as that would kill other instances of Excel.

        Dim connectionString As String
        connectionString = "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=" & ExcelFilePath & "; Extended Properties=excel 8.0; Persist Security Info=False"
        excelSheetConnection = New ADODB.Connection
        If excelSheetConnection.State = 1 Then excelSheetConnection.Close()
        excelSheetConnection.Open(connectionString)
        objRsExcelSheet = New ADODB.Recordset
        If objRsExcelSheet.State = 1 Then objRsExcelSheet.Close()
        Try
            If TestID = "" Then
                objRsExcelSheet.Open("Select * from [" & ActiveSheet & "$]", excelSheetConnection, 1, 1)
            Else
                objRsExcelSheet.Open("Select Test_ID,Test_Description,Expected_Result,Type,UI_Element,Action,Data,Risk from [" & ActiveSheet & "$] WHERE TEST_Id LIKE '" & TestID & ".%'", excelSheetConnection, 1, 1)
            End If
            getExcelData = objRsExcelSheet
        Catch errObj As System.Runtime.InteropServices.COMException
            MsgBox(errObj.Message, , errObj.Source)
            Return Nothing
        End Try
        excelSheetConnection = Nothing
        objRsExcelSheet = Nothing

    Read the article

  • Rollback doesn't work in MySQLdb

    - by Anton Barycheuski
    I have the following code:

        ...
        db = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db, charset='utf8', use_unicode=True)
        db.autocommit(False)
        cursor = db.cursor()
        ...
        for col in ws.columns[1:]:
            data = (col[NUM_ROW_GENERATION].value, 1, type_topliv_dict[col[NUM_ROW_FUEL].value])
            fullgeneration_id = data[0]
            type_topliv = data[2]
            if data in completions_set:
                compl_id = completions_dict[data]
            else:
                ...
                sql = u"INSERT INTO completions (type, mark, model, car_id, type_topliv, fullgeneration_id, mark_id, model_id, production_period, year_from, year_to, production_period_url) VALUES (1, '%s', '%s', 0, %s, %s, %s, %s, '%s', '%s', '%s', '%s')" % (marks_dict[mark_id], models_dict[model_id], type_topliv, fullgeneration_id, mark_id, model_id, production_period, year_from, year_to, production_period.replace(' ', '_').replace(u'?.?.', 'nv'))
                inserted_completion += cursor.execute(sql)
                cursor.execute("SELECT fullgeneration_id, type, type_topliv, id FROM completions where fullgeneration_id = %s AND type_topliv = %s" % (fullgeneration_id, type_topliv))
                row = cursor.fetchone()
                compl_id = row[3]
            if is_first_car:
                deleted_compl_rus = cursor.execute("delete from compl_rus where compl_id = %s" % compl_id)
            for param, row_id in params:
                sql = u"INSERT INTO compl_rus (compl_id, modification, groupparam, param, paramvalue) VALUES (%s, '%s', '%s', '%s', %s)" % (compl_id, col[NUM_ROW_MODIFICATION].value, param[0], param[1], col[row_id].value)
                inserted_compl_rus += cursor.execute(sql)
            is_first_car = False
        db.rollback()
        print '\nSTATISTICS:'
        print 'Inserted completion:', inserted_completion
        print 'Inserted compl_rus:', inserted_compl_rus
        print 'Deleted compl_rus:', deleted_compl_rus
        ans = raw_input('Commit changes? (y/n)')
        db.close()

    I manually deleted records from the table and then ran the script two times. See https://dpaste.de/MwMa. I think that rollback in my code doesn't work. Why?
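    One common reason a rollback like this appears to do nothing is that the tables use a non-transactional storage engine (MyISAM ignores ROLLBACK; InnoDB honours it), or that some statement in between causes an implicit commit. A minimal round-trip to test against, in the same Python 2 style as the code above, with placeholder connection values and a hypothetical column list:

        import MySQLdb

        # Placeholder connection values; ROLLBACK only undoes work on
        # transactional engines such as InnoDB.
        db = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                             db="test", charset="utf8", use_unicode=True)
        db.autocommit(False)          # no implicit commit after each statement
        cursor = db.cursor()
        try:
            cursor.execute("INSERT INTO completions (type) VALUES (%s)", (1,))
            ans = raw_input("Commit changes? (y/n) ")
            if ans.lower().startswith("y"):
                db.commit()           # make the insert permanent
            else:
                db.rollback()         # discard it (no effect on MyISAM tables)
        finally:
            db.close()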

    Read the article

  • How is fseek() implemented in the filesystem?

    - by pajton
    This is not a pure programming question; however, it impacts the performance of programs using fseek(), hence it is important to know how it works. (A little disclaimer so that it doesn't get closed.) I am wondering how efficient it is to insert data in the middle of a file. Suppose I have a file with 1MB of data and then I insert something at the 512KB offset. How efficient would that be compared to appending my data at the end of the file? Just to make the example complete, let's say I want to insert 16KB of data. I understand the answer varies depending on the filesystem; however, I assume that the techniques used in common filesystems are quite similar, and I just want to get the right notion of it.
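    One point worth making explicit: common filesystems have no insert operation at all. Seeking to an offset and writing overwrites bytes in place, so "inserting" 16KB at the 512KB mark means the application must read everything after the offset and write it back out shifted, while appending touches no existing bytes. A minimal sketch of what an insert actually costs, in Python:

        def insert_into_file(path, offset, payload):
            # There is no filesystem-level insert: everything after `offset`
            # has to be read and rewritten after the new data.
            with open(path, "r+b") as f:
                f.seek(offset)
                tail = f.read()       # the 512KB that has to move, in the example
                f.seek(offset)
                f.write(payload)      # the 16KB being inserted
                f.write(tail)         # rewrite the displaced tail

        # Appending, by contrast, leaves existing bytes untouched:
        # with open(path, "ab") as f: f.write(payload)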

    Read the article
