Search Results

Search found 43986 results on 1760 pages for 'sql session state'.

  • Varnish cache and PHP session; setting header?

    - by StCee
    Varnish by default will not cache pages with cookies. I read in some posts that one workaround for PHP pages is to set header('Cache-Control: public, s-maxage=60'); in the PHP pages. But would that make Varnish cache the page along with the session cookie? The session is started on that page, and although there is nothing personal on it, I would still want the session to persist in case the user does something private later. So is there a way to cache the page without the session cookie, and still be able to pass the session between pages? I can imagine some sort of weird solution with a hidden form, but I would prefer if it could be done with VCL configuration or a header setting. Thanks a lot!

    Read the article

  • Chrome logs me out of everything when I exit--tried cookie-related stuff already

    - by GreatBigBore
    I've been using Chrome very successfully for a long time. It has always kept me logged in to all my sites even after exiting the app. Recently it started logging me out of everything when I exit Chrome. I've fooled around with all the various advanced cookie settings, and I've cycled through the options hoping that Chrome just needed a wakeup call or a reset or something. I've also deleted all the cookies in case a corrupted one is confusing Chrome. Nothing works! I see cookies when I log in, but they all go away when I exit Chrome. I've searched all over the place and seen only the standard answers relating to resetting cookies, local data, sessions, that sort of thing. Any Chrome gurus out there, please send a telepathic message to my browser asking it to resume its previous excellent behavior. Alternatively, you could suggest other possible solutions.

    Read the article

  • Can I install applications to Remote Desktop Session Hosts via Group Policy?

    - by CC.
    I have a GPO that installs an application using the Software installation policy under Computer Configuration. I assign this GPO to the OU with our desktop/laptop computers, and my clients all install the software fine. I have another separate OU that covers our new Server 2012 RD session hosts. Previously, we've manually installed applications on our one Terminal Server. Now we have one Broker and two Session Hosts. I'd like to take my existing GPO, assign it to the session hosts, and have it install on the next reboot after a gpupdate so I'm sure that each is identically configured. Given this info: Should I be able to install applications via GPO to Session Hosts? Will Group Policy automatically install the applications as if I put the session host into /install mode, or do I need to do that?

    Read the article

  • Bash script throws "syntax error near unexpected token `}'" when run

    - by Tab00
    I am trying to write a script to monitor some battery statuses on a laptop running as a server. To accomplish this, I have already started to write this code: #! /bin/bash # A script to monitor battery statuses and send out email notifications #take care of looping the script for (( ; ; )) do #First, we check to see if the battery is present... if(cat /proc/acpi/battery/BAT0/state | grep 'present: *' == present: yes) { #Code to execute if battery IS present #No script needed for our application #you may add scripts to run } else { #if the battery IS NOT present, run this code sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is either missing, or removed. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #Second, we check into the current state of the battery if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charging') { #Code to execute if battery is charging sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is charging. This MIGHT mean that something just happened" -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #If it isn't charging, is it discharging? else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: discharging') { #Code to run if the battery is discharging sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is discharging. This shouldn't be happening. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #If it isn't charging or discharging, is it charged? else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charged') { #Code to run if battery is charged } done I'm pretty sure that most of the other stuff works correctly, but I haven't been able to try it because it will not run. whenever I try and run the script, this is the error that I get: ./BatMon.sh: line 15: syntax error near unexpected token `}' ./BatMon.sh: ` }' is the error something super simple like a forgotten semicolon? Thanks -Tab00

    Read the article

  • PowerBroker (Likewise-Open) + Ubuntu 13.04 -> 13.10 Upgrade

    - by JoBu1324
    I just upgraded Ubuntu from 13.04 to 13.10, and now I can't log into Active Directory; my system is integrated using PowerBroker Identity Services (PBIS), which used to be called Likewise-Open. So far I have identified the following symptoms: I am able to log in with my credentials via ssh. The screen goes black when attempting log into my account via the login screen. I've tried leaving the domain, purging PBIS, and re-installing the latest version of PBIS. I've been trying the troubleshooting section I found here, but I haven't had any success. The relevant portion of the auth.log Oct 22 09:30:26 mypc lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "myusername" Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm-greeter:session): session closed for user lightdm Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm:session): session opened for user myusername by (uid=0) Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm:session): session closed for user myusername Oct 22 09:30:30 mypc lightdm: pam_unix(lightdm-greeter:session): session opened for user lightdm by (uid=0) Oct 22 09:30:30 mypc systemd-logind[718]: New session c5 of user lightdm. Oct 22 09:30:30 mypc lightdm: pam_ck_connector(lightdm-greeter:session): nox11 mode, ignoring PAM_TTY :1 Oct 22 09:30:31 mypc dbus[535]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.129" (uid=110 pid=5139 comm="/usr/lib/x86_64-linux-gnu/indicator-keyboard-servi") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.39" (uid=0 pid=2024 comm="/usr/sbin/console-kit-daemon --no-daemon ") My .xsession-errors log Script for ibus started at run_im. Script for auto started at run_im. Script for default started at run_im. /usr/sbin/lightdm-session: 5: exec: init: not found

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following example of a use case? I have a table containing orders. These orders have a lot of related information needed by my current queries (think about the products; the buyer information; the region, country and state of the sale point; and so on). To follow a de-normalized approach, I shouldn't put identifiers of these related items in my main orders collection. Instead, I have to repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming the previous premise, I'm committing to maintaining all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB blocks at the document level on updates, I would be blocking the entire order at the moment of the update). Will I have to replicate all the products' related data (i.e. category, maker and optional attributes like color, size…)? What if a new feature is requested and I have to make a lot of queries with the products "as the entry point of the query" (i.e. reports showing the products' sales performance grouped by region, country, or whatever)? Is it fair enough to apply the $unwind operation to my original orders collection? (What about the performance?) Should I create another collection with these queries in mind and replicate all the products' information (and their orders) again? Wouldn't it be better to store a product_id in the original orders collection in order to be more tolerant to requirements changes? (What about emulating JOINs?) Would the optimal approach be a mixed solution with an RDBMS like MySQL in order to retrieve the complete data? I mean: store product, user, and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query )
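
    For reference, the relational side of the mixed approach mentioned at the end might look roughly like the sketch below (the table and column names are illustrative, not taken from the question):

      -- Hypothetical normalized users table kept in MySQL; the MongoDB orders
      -- collection would then only store user_id / product_id references.
      CREATE TABLE users (
          user_id INT PRIMARY KEY,
          name    VARCHAR(100),
          surname VARCHAR(100)
      );

      -- "getAllUsersDataByIds": resolve the identifiers returned by the MongoDB query.
      SELECT *
      FROM users
      WHERE user_id IN (101, 102, 103);  -- ids retrieved from the MongoDB result set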

    Read the article

  • Oracle 9i Session Disconnections

    - by mlaverd
    [Cross-Posting from ServerFault] I am in a development environment, and our test Oracle 9i server has been misbehaving for a few days now. What happens is that we have our JDBC connections disconnecting after a few successful connections. We got this box set up by our IT department and handed over to us. It is 'our problem', so options like 'ask your DBA' aren't going to help me. :( Our server is set up with 3 plain databases (one is the main dev db, the other is the 'experimental' dev db). We use the Oracle 10 ojdbc14.jar thin JDBC driver (because of some bug in version 9 of the driver). We're using Hibernate to talk to the DB. The only thing that I can see that changed is that we now have more users connecting to the server. Instead of one developer, we now have 3. With the Hibernate connection pools, I'm thinking that maybe we're hitting some limit? Does anyone have any idea what's going on? Here's the stack trace on the client: Caused by: org.hibernate.exception.GenericJDBCException: could not execute query at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126) [hibernate3.jar:na] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114) [hibernate3.jar:na] at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [hibernate3.jar:na] at org.hibernate.loader.Loader.doList(Loader.java:2235) [hibernate3.jar:na] at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129) [hibernate3.jar:na] at org.hibernate.loader.Loader.list(Loader.java:2124) [hibernate3.jar:na] at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401) [hibernate3.jar:na] at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363) [hibernate3.jar:na] at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196) [hibernate3.jar:na] at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149) [hibernate3.jar:na] at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) [hibernate3.jar:na] ...
Caused by: java.sql.SQLException: Io exception: Connection reset at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:829) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1049) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1154) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3415) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) [hibernate3.jar:na] at org.hibernate.loader.Loader.getResultSet(Loader.java:1812) [hibernate3.jar:na] at org.hibernate.loader.Loader.doQuery(Loader.java:697) [hibernate3.jar:na] at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) [hibernate3.jar:na] at org.hibernate.loader.Loader.doList(Loader.java:2232) [hibernate3.jar:na]
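
    One quick thing worth checking for a symptom like this (a sketch only, assuming a user with access to the v$ views) is whether the instance is hitting its sessions or processes limits as more developers and pooled connections pile up:

      -- Compare current and peak usage against the configured limits (Oracle 9i and later).
      SELECT resource_name, current_utilization, max_utilization, limit_value
      FROM   v$resource_limit
      WHERE  resource_name IN ('sessions', 'processes');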

    Read the article

  • How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database

    - by Feroz Khan
    How to load a binary file (.bin) of size 6 MB in a varbinary(MAX) column of a SQL Server 2005 database using ADO in a VC++ application. This is the code I am using to load the file, which I used to load a .bmp file: BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open File CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Alocating memory for data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Putting data into file fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the BYTES rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a varient varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData->AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("Unhandle Exception"); } //Free Memory GlobalUnlock(lpData); return true; } But when I read the same file using the GetChunk function it gives me all 0s, but the size of the file I get is the same as the one uploaded. Your help will be highly appreciated. Thanks in advance.
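
    As a purely server-side alternative (a sketch; the table name and file path below are invented, not from the question), SQL Server 2005 can pull the whole file into a varbinary(MAX) column in one statement, which avoids the SAFEARRAY chunking entirely:

      -- Requires the file to be readable by the SQL Server service account.
      INSERT INTO dbo.EcgFiles (FileName, FileData)
      SELECT N'sample.bin', BulkColumn
      FROM OPENROWSET(BULK N'C:\data\sample.bin', SINGLE_BLOB) AS src;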

    Read the article

  • Unwanted debug session

    - by b3y4z1d
    Try this code I created,when I use it in Borland C++ and try the remove-function a debug session opens up and it points to a file called "xstring",and it is saying "EAccessViolation". it points to this line in the file: return (compare(0, _Mysize, _Right._Myptr(), _Right.size())); //--------------------------------------------------------------------------- #include<iostream> #include<string> #include<fstream> #include<list> #pragma hdrstop using namespace std; struct Mail{ string name; string ammount; }; //---------------------------Call_Functions----------------------------- void new_mail(list<Mail>& l); void show_mail(list<Mail> l); void remove(list<Mail>& l); //---------------------------------Menu-------------------------------------- #pragma argsused int main(int argc, char* argv[]) { list<Mail> mail; bool contin = true; char answ; do{ cout<<'\n'<<'\t'<<'\t'<<"Menu"<<endl <<'\t'<<'\t'<<"----"<<endl <<"1. New mail"<<endl <<"2. Show mail"<<endl <<"3. Remove mail"<<endl <<"4. Exit"<<endl<<endl; cin>>answ; cin.ignore(1000, '\n'); switch (answ) { case '1': new_mail(mail); break; case '2': show_mail(mail); break; case '3': remove(mail); break; case '4': exit(1); default: cout<<"Choice not recognized"; } } while(contin); return 0; } //------------------------------Functions------------------------------------- //------------------------------New_mail-------------------------------------- void new_mail(list<Mail>& l){ Mail p; cout<<endl<<"Type in the name of the new mail "; getline(cin, p.name); cout<<"Now type in the cost: "; getline(cin, p.ammount); l.push_back(p); } //------------------------------Show_Mail------------------------------------- void show_mail(list<Mail> l){ list<Mail>::iterator it; cout<<"\nAll mail:\n\n"; for (it = l.begin(); it != l.end(); it++) { cout<<(*it).name<<'\t'<<'\t'<<(*it).ammount<<endl; } } //------------------------------Remove---------------------------------------- void remove(list<Mail>& l){ list<Mail>::iterator it; string name; cout<<endl<<"What is the name of the mail you want to remove?: "; getline(cin, name); for (it = l.begin(); it != l.end(); it++) { if ((*it).name == name) { l.erase(it); } } } //------------------------------End----------------------------------------- Why does it show this error,and how can I solve it?

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table.' (BulkUpsert) I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there are a multitude of users running a C# Gui polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but then, after market hours, we've recently started seeing an increasing number of Deadlock exceptions coming out of the database, nowadays we see 10-20 a day. The imporant thing to note here is that these happen when the values are NOT changing. Here's all the relevant info: Table Def: CREATE TABLE [dbo].[MarketDataCurrent]( [MDID] [int] NOT NULL, [LastUpdate] [datetime] NOT NULL, [Value] [float] NOT NULL, [Source] [varchar](20) NULL, CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED ( [MDID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] - stackoverflow wont let me post images until my reputation goes up to 10, so i'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][1] [1]: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg I've got a Sql Profiler Trace Running, catching the deadlocks, and here's what all the graphs look like. stackoverflow wont let me post images until my reputation goes up to 10, so i'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][2] [2]: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg Process 258 is called the following 'BulkUpsert' stored proc, repeatedly, while 73 is calling the next one: ALTER proc [dbo].[MarketDataCurrent_BulkUpload] @updateTime datetime, @source varchar(10) as begin transaction update c with (rowlock) set LastUpdate = getdate(), Value = t.Value, Source = @source from MarketDataCurrent c INNER JOIN #MDTUP t ON c.MDID = t.mdid where c.lastUpdate < @updateTime and c.mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%') and c.value <> t.value insert into MarketDataCurrent with (rowlock) select MDID, getdate(), Value, @source from #MDTUP where mdid not in (select mdid from MarketDataCurrent with (nolock)) and mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%') commit And the other one: ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload] AS begin transaction -- Update existing mdid UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source FROM MarketDataCurrent c INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid; -- Insert new MDID INSERT INTO MarketDataCurrent with (ROWLOCK) SELECT * FROM #TEMPTABLE2 WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK)) -- Clean up the temp table DELETE #TEMPTABLE2 commit To clarify, those Temp Tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a Unique Constraint instead but that increased the number of deadlocks 10-fold. 
I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
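
    One locking pattern often suggested for exactly this kind of concurrent upsert (offered here only as a sketch against the procedures quoted above, not a verified fix) is to take update locks with range semantics for the whole transaction, so the two bulk upserts queue behind each other instead of deadlocking on PK_MarketDataCurrent:

      -- Sketch: hold UPDLOCK/HOLDLOCK across the upsert so concurrent callers serialize.
      BEGIN TRANSACTION;

      UPDATE c WITH (UPDLOCK, HOLDLOCK)
      SET    LastUpdate = t.LastUpdate,
             Value      = t.Value,
             Source     = t.Source
      FROM   MarketDataCurrent c
      INNER JOIN #TEMPTABLE2 t ON c.MDID = t.MDID;

      INSERT INTO MarketDataCurrent (MDID, LastUpdate, Value, Source)
      SELECT t.MDID, t.LastUpdate, t.Value, t.Source
      FROM   #TEMPTABLE2 t
      WHERE  NOT EXISTS (SELECT 1
                         FROM   MarketDataCurrent WITH (UPDLOCK, HOLDLOCK)
                         WHERE  MDID = t.MDID);

      COMMIT;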

    Read the article

  • How to access Hibernate session from src folder?

    - by firnnauriel
    I would like to know how to access the Service and Domains properly in this sample class placed in src/java folder public class NewsIngestion implements Runnable { private String str; private int num; private Logger log = Logger.getLogger("grails.app"); private static boolean isRunning; private Thread t; private WorkerJobService jobService; private NewsService newsService; public NewsIngestion(String s, int n) { jobService = new WorkerJobService(); newsService = new NewsService(); str = s; num = n; isRunning = false; t = new Thread (this, "NewsIngestion"); } public void run () { while(isRunning){ try{ if(jobService.isJobEnabled("ConsumeFeedsJob") && jobService.lockJob("ConsumeFeedsJob")){ log.info("${this.class.name}: ConsumeFeedsJob started"); try{ // get all sources List sources = (List) InvokerHelper.invokeMethod(RSSFeed.class, "list", null); for(int i = 0; i < sources.size(); i++) { RSSFeed s = (RSSFeed) sources.get(i); // check if it's time to read the source int diff = DateTimeUtil.getSecondsDateDiff(s.getLastChecked(), new Date()); if(s.getLastChecked() == null || diff >= s.getCheckInterval()){ List keyword_list = (List) InvokerHelper.invokeMethod(Keyword.class, "list", null); for(int j = 0; j < keyword_list.size(); j++) { String keyword = (String) keyword_list.get(j); try{ newsService.ingestNewsFromSources(keyword, s); }catch(Exception e){ log.error("${this.class.name}: ${e}"); } log.debug("Completed reading feeds for ${keyword}."); log.info("${this.class.name}: Reading feeds for '${keyword}' (${s.feedName}) took ${Float.toString(st2.getDuration())} second(s)."); } s.setLastChecked(new Date()); InvokerHelper.invokeMethod(RSSFeed.class, "save", null); } log.info("${this.class.name}: Reading feeds for '${s.feedName}' for all keywords took ${Float.toString(st.getDuration())} second(s)."); } }catch(Exception e){ log.error("${this.class.name}: Exception: ${e}"); } log.info("${this.class.name}: ConsumeFeedsJob ended."); // unlock job jobService.unlockJob("ConsumeFeedsJob"); } log.info("alfred: success"); } catch (Exception e){ log.info("alfred exception: " + e.getMessage()); } try { Thread.sleep(5000); } catch (InterruptedException e) { log.info(e.getMessage()); } } } public void start() { if(t == null){ t = new Thread (this, "NewsIngestion"); } if(!isRunning){ isRunning = true; t.start(); } } public void stop() { isRunning = false; } public boolean isRunning() { return isRunning; } } I'm encountering this error message: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here Thanks.

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database using ADO in a VC++ application. This is the code I am using to load the file which I used to load a .bmp file: BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open File CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Alocating memory for data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Putting data into file fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the BYTES rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a varient varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData->AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("***Unhandle Exception***"); } //Free Memory GlobalUnlock(lpData); return true; } But when I read the same file using Getchunk function it gives me all 0s but the size of the file I get is same as the one uploaded. Your help will be highly appreciated.

    Read the article

  • Should I create a unique clustered index, or non-unique clustered index on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this: Table_Docs ID, Bigint (Identity col) OutputFileID, int Sequence, int …(many other fields) We find ourselves in a situation where the developer who designed it made the OutputFileID the clustered index. It is not unique. There can be thousands of records with this ID. It has no benefit to any processes using this table, so we plan to remove it. The question is what to change it to… I have two candidates; the ID identity column is a natural choice. However, we have a process which does a lot of update commands on this table, and it uses the Sequence to do so. The Sequence is non-unique. Most records only contain one, but about 20% can have two or more records with the same Sequence. The INSERT app is a VB6 piece of crud throwing thousands of insert commands at the table. The inserted values are never in any particular order. So the Sequence of one insert may be 12345, and the next could be 12245. I know that this could cause SQL to move a lot of data to keep the clustered index in order. However, the Sequence of the inserts is generally close to being in order. All inserts would take place at the end of the clustered table. Eg: I have 5 million records with Sequence spanning 1 to 5 million. The INSERT app will be inserting Sequences at the end of that range at any given time. Reordering of the data should be minimal (tens of thousands of records at most). Now, the UPDATE app is our .NET star. It does all UPDATES on the Sequence column. "Update Table_Docs Set Feild1=This, Field2=That…WHERE Sequence =12345" – hundreds of thousands of these a day. The UPDATES are completely and totally random, touching all points of the table. All other processes are simply doing SELECTs on this (Web pages). Regular indexes cover those. So my question is, what's better: a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on the Sequence, benefiting the UPDATE app?
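
    For what it's worth, the first candidate is just the standard identity-as-clustering-key pattern; roughly (a sketch that reuses only the column names given above, with an invented name for the existing clustered index):

      -- Replace the non-unique clustered index on OutputFileID...
      DROP INDEX IX_Table_Docs_OutputFileID ON dbo.Table_Docs;

      -- ...with a unique clustered index on the ever-increasing identity column,
      CREATE UNIQUE CLUSTERED INDEX IX_Table_Docs_ID
          ON dbo.Table_Docs (ID);

      -- and keep the UPDATE app fast with a non-clustered index on Sequence.
      CREATE NONCLUSTERED INDEX IX_Table_Docs_Sequence
          ON dbo.Table_Docs (Sequence);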

    Read the article

  • A few problems with Delphi involving Mail Merge, SQL + Databases.

    - by Daniel
    My first problem is with mail merge. I have created a Data File and a table, yet I am not able to fill my table with information from my Data File. The << just seems to be inserted after wherever the cursor is on the page, which is not where the table is. All that is entered into the actual table is a '59'. Therefore I think I either need to change the code or be able to move the cursor. Here is the code I am currently using: wrdDoc.Tables.Add(wrdSelection.Range, ADOTable1.FieldCount, 3); wrdDoc.Tables.Item(1).Columns.Item(1).SetWidth(51,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(2).SetWidth(20,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(3).SetWidth(100,wdAdjustNone); // Set the shading on the first row to light gray wrdDoc.Tables.Item(1).Rows.Item(1).Cells .Shading.BackgroundPatternColorIndex := wdGray25; // BOLD the first row wrdDoc.Tables.Item(1).Rows.Item(1).Range.Bold := True; // Center the text in Cell (1,1) wrdDoc.Tables.Item(1).Cell(1,1).Range.Paragraphs.Alignment := wdAlignParagraphCenter; // Fill each row of the table with data wrdDoc.Tables.Item(1).Cell(1, 1).Range.InsertAfter('Time'); wrdDoc.Tables.Item(1).Cell(1, 2).Range.InsertAfter(''); wrdDoc.Tables.Item(1).Cell(1, 3).Range.InsertAfter('Teacher'); For Count := 1 to (ADOTable1.FieldCount - 1) do begin wrdDoc.Tables.Item(1).Cell((Count + 1), 1).Range.InsertAfter(wrdSelection.Range,'Time' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 2).Range.InsertAfter(wrdSelection.Range,'THonorific' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 3).Range.InsertAfter(wrdSelection.Range,'TSurname' + IntToStr(Count)); end; My second problem is that I do not know what the correct SQL syntax is for editing the name of a column in the database (I am using Delphi 7 and the Microsoft Jet Engine, if that makes a difference). The third problem is that when I add a new column to my database manually (which I need to do) I get a 'violation' error in one of my units when I activate an ADOTable. This only happens on one unit, and it happens when I add a column with any name anywhere in the table. I know that is vague but I can't seem to narrow down the problem any further than that. If you could help me with any of those it would be great. Thanks.
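
    On the second problem: Jet SQL (as used from Delphi 7 through ADO) has no direct column-rename statement, so the usual workaround is add-copy-drop. A rough sketch with invented table and column names:

      -- Jet 4.0 has no ALTER TABLE ... RENAME COLUMN, so rename in three steps.
      ALTER TABLE Teachers ADD COLUMN SurnameNew TEXT(50);

      UPDATE Teachers SET SurnameNew = Surname;

      ALTER TABLE Teachers DROP COLUMN Surname;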

    Read the article

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind i need to describe this by abstracting all possible confidential info: I've been put in charge of a 10-year old transactional system of which the majority business logic is implemented at database level (triggers, stored procedures etc). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters, lets call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular - ranging 0 to 60 times a minute, depending on what items the program instance is responsible for) . Performance of the system has dropped considerably with 10 yrs of data growth: the reason is the deadlocks and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered: In sp_ProcessTrans' execution, it selects from an Employee table 7 times (dont ask) The select is done on a field that is NOT the primary key No index exists on this field. Thus a table scan is performed. 7 times. per transaction So the reason for deadlocks is clear. I created a non-unique ordered clustered index on the field (field looks good, almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that i cannot simulate the deadlocks in a test environment (I'd need 30 workstations; i'd need to simulate 'realistic' activity on those stations, so visualization is out). I need to know if i must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements/extra wait time etc) to create this field index on the production database while the transactions are still taking place? (although i can select a time when transactions are fairly quiet through the 30 stations) Are there any hidden dangers i'm not seeing (not looking forward to needing to restore the DB if something goes wrong, restoring would take a lot of time with 10yrs of data).
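
    For context, the operation itself is only an index build; something like the sketch below (the index and column names are invented). On SQL Server 2000 a clustered index build is an offline operation, so the practical risk is blocking of sp_ProcessTrans calls while the table is rebuilt into clustered order, rather than corruption:

      -- Sketch only: non-unique clustered index on the frequently scanned Employee field.
      CREATE CLUSTERED INDEX IX_Employee_EmpNum
          ON dbo.Employee (EmpNum);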

    Read the article

  • Android - Resuming application state - SL4A

    - by toyotajon93
    please dont harpoon me for a noob-ish question. I am working on an android application using SL4A, when my application starts it runs in the background while the script is being executed. I'm not sure where to start but each time I click my icon, it re-starts my application. I have tried using different launchmodes with nothing different happening. I'm thinking it has to do with the OnCreate code, and the setting of the notification. I need help saving my application state and then resuming on either re-click of icon or click from notification bar. I've tried everything had to turn here for help. I am not a pro at android programming by any means. Thanks guys, be gentle ;) Public void onCreate() { super.onCreate(); mInterpreterConfiguration = ((BaseApplication) getApplication()) .getInterpreterConfiguration(); } @Override public void onStart(Intent intent, final int startId) { super.onStart(intent, startId); String fileName = Script.getFileName(this); Interpreter interpreter = mInterpreterConfiguration .getInterpreterForScript(fileName); if (interpreter == null || !interpreter.isInstalled()) { mLatch.countDown(); if (FeaturedInterpreters.isSupported(fileName)) { Intent i = new Intent(this, DialogActivity.class); i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); i.putExtra(Constants.EXTRA_SCRIPT_PATH, fileName); startActivity(i); } else { Log .e(this, "Cannot find an interpreter for script " + fileName); } stopSelf(startId); return; } // Copies script to internal memory. fileName = InterpreterUtils.getInterpreterRoot(this).getAbsolutePath() + "/" + fileName; File script = new File(fileName); // TODO(raaar): Check size here! if (!script.exists()) { script = FileUtils.copyFromStream(fileName, getResources() .openRawResource(Script.ID)); } copyResourcesToLocal(); // Copy all resources if (Script.getFileExtension(this) .equals(HtmlInterpreter.HTML_EXTENSION)) { HtmlActivityTask htmlTask = ScriptLauncher.launchHtmlScript(script, this, intent, mInterpreterConfiguration); mFacadeManager = htmlTask.getRpcReceiverManager(); mLatch.countDown(); stopSelf(startId); } else { mProxy = new AndroidProxy(this, null, true); mProxy.startLocal(); mLatch.countDown(); ScriptLauncher.launchScript(script, mInterpreterConfiguration, mProxy, new Runnable() { @Override public void run() { mProxy.shutdown(); stopSelf(startId); } }); } } RpcReceiverManager getRpcReceiverManager() throws InterruptedException { mLatch.await(); if (mFacadeManager==null) { // Facade manage may not be available on startup. mFacadeManager = mProxy.getRpcReceiverManagerFactory() .getRpcReceiverManagers().get(0); } return mFacadeManager; } @Override protected Notification createNotification() { Notification notification = new Notification(R.drawable.script_logo_48, this.getString(R.string.loading), System.currentTimeMillis()); // This contentIntent is a noop. PendingIntent contentIntent = PendingIntent.getService(this, 0, new Intent(), 0); notification.setLatestEventInfo(this, this.getString(R.string.app_name), this.getString(R.string.loading), contentIntent); notification.flags = Notification.FLAG_ONGOING_EVENT; return notification; }

    Read the article

  • What is the best approach in SQL to store multi-level descriptions?

    - by gime
    I need a new perspective on how to design a reliable and efficient SQL database to store multi-level arrays of data. This problem applies to many situations, but I came up with this example: There are hundreds of products. Each product has an undefined number of parts. Each part is built from several elements. All products are described in the same way. All parts would require the same fields to describe them (let's say: price, weight, part name), and all elements of all parts also have a uniform design (for example: element code, manufacturer). Plain and simple. One element may be related to only one part, and each part is related to one product only. I came up with the idea of three tables: Products: -------------------------------------------- prod_id prod_name prod_price prod_desc 1 hoover 120 unused next Parts: ---------------------------------------------------- part_id part_name part_price part_weight prod_id 3 engine 10 20 1 and finally Elements: --------------------------------------- el_id el_code el_manufacturer part_id 1 BFG12 GE 3 Now, select a desired product, select all from PARTS where prod_id is the same, and then select all from ELEMENTS where part_id matches - after multiple queries you've got all the data. I'm just not sure if this is the right approach. I've also got another idea, without the ELEMENTS table. That would decrease the number of queries, but I'm a bit afraid it might be lame and bad practice. Instead of the ELEMENTS table there are two more fields in the PARTS table, so it looks like this: part_id, part_name, part_price, part_weight, prod_id, part_el_code, part_el_manufacturer they would be text type, and for each part, information about elements would be stored as strings, this way: part_el_code | code_of_element1; code_of_element2; code_of_element3 part_el_manufacturer | manuf_of_element1; manuf_of_element2; manuf_of_element3 Then all we need is to explode() the data from those fields, and we get arrays, easy to display. Of course this is not perfect and has some limitations, but is this idea OK? Or should I just go with the first idea? Or maybe there is a better approach to this problem? It's really hard to describe in a few words, and that means it's hard to search for an answer. Also, understanding the principles of designing databases is not as easy as it seems.
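
    The three-table design does not have to mean multiple round trips per product; a minimal sketch using the table and column names from the question:

      -- One query per product: join parts and elements down the hierarchy.
      SELECT p.prod_name,
             pa.part_name, pa.part_price, pa.part_weight,
             e.el_code, e.el_manufacturer
      FROM   Products p
      JOIN   Parts    pa ON pa.prod_id = p.prod_id
      JOIN   Elements e  ON e.part_id  = pa.part_id
      WHERE  p.prod_id = 1;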

    Read the article

  • Visual State Manager in WPF not working for me

    - by Román
    Hi In a wpf project I have this XAML code <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:ic="clr-namespace:Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions" x:Class="WpfApplication1.MainWindow" xmlns:vsm="clr-namespace:System.Windows;assembly=WPFToolkit" x:Name="Window" Title="MainWindow" Width="640" Height="480"> <vsm:VisualStateManager.VisualStateGroups> <vsm:VisualStateGroup x:Name="VisualStateGroup"> <vsm:VisualState x:Name="Loading"> <Storyboard> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="control" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="button" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="button1" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </vsm:VisualState> <VisualState x:Name="Normal"> <Storyboard> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="control" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> </vsm:VisualStateGroup> </vsm:VisualStateManager.VisualStateGroups> <Grid x:Name="LayoutRoot"> <Grid.Resources> <ControlTemplate x:Key="loadingAnimation"> <Image x:Name="content" Opacity="1"> <Image.Source> <DrawingImage> <DrawingImage.Drawing> <DrawingGroup> <GeometryDrawing Brush="Transparent"> <GeometryDrawing.Geometry> <RectangleGeometry Rect="0,0,1,1"/> </GeometryDrawing.Geometry> </GeometryDrawing> <DrawingGroup> <DrawingGroup.Transform> <RotateTransform x:Name="angle" Angle="0" CenterX="0.5" CenterY="0.5"/> </DrawingGroup.Transform> <GeometryDrawing Geometry="M0.9,0.5 A0.4,0.4,90,1,1,0.5,0.1"> <GeometryDrawing.Pen> <Pen Brush="Green" Thickness="0.1"/> </GeometryDrawing.Pen> </GeometryDrawing> <GeometryDrawing Brush="Green" Geometry="M0.5,0 L0.7,0.1 L0.5,0.2"/> </DrawingGroup> </DrawingGroup> </DrawingImage.Drawing> </DrawingImage> </Image.Source> </Image> <ControlTemplate.Triggers> <Trigger Property="Visibility" Value="Visible"> <Trigger.EnterActions> <BeginStoryboard x:Name="animation"> <Storyboard> <DoubleAnimation From="0" To="359" Duration="0:0:1.5" RepeatBehavior="Forever" Storyboard.TargetName="angle" Storyboard.TargetProperty="Angle"/> </Storyboard> </BeginStoryboard> </Trigger.EnterActions> <Trigger.ExitActions> <StopStoryboard BeginStoryboardName="animation"/> </Trigger.ExitActions> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Grid.Resources> <Grid.ColumnDefinitions> <ColumnDefinition MinWidth="76.128" Width="Auto"/> <ColumnDefinition MinWidth="547.872" Width="Auto"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition 
Height="0.05*"/> <RowDefinition Height="0.95*"/> </Grid.RowDefinitions> <Button x:Name="button" Margin="0,0,1,0.04" Width="100" Content="Load" d:LayoutOverrides="Height" Click="Button_Click"/> <Button x:Name="button1" HorizontalAlignment="Left" Margin="0,0,0,0.04" Width="100" Content="Stop" Grid.Column="1" d:LayoutOverrides="Height" Click="Button2_Click" Visibility="Collapsed"/> <Control x:Name="control" Margin="10" Height="100" Grid.Row="1" Grid.ColumnSpan="2" Width="100" Template="{DynamicResource loadingAnimation}" Visibility="Collapsed"/> </Grid> </Window> and the following code behind on the window public partial class MainWindow : Window { public MainWindow() { this.InitializeComponent(); } private void Button1_Click(object sender, System.Windows.RoutedEventArgs e) { VisualStateManager.GoToState(this, "Loading", true); } private void Button2_Click(object sender, System.Windows.RoutedEventArgs e) { VisualStateManager.GoToState(this, "Normal", true); } } However, when I click the first button (button1) the state change is not being triggered. What am I doing wrong? Thanks in advance

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately API methods that expose the schema of a trace file have known issues and are unreliable, put simply the data often differs from what was specified. To overcome these limitations the component uses  some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and the way you execute your package is setup to use them, please see the help topic 64-bit Considerations for Integration Services for more details. Downloads Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008 Version History SQL Server 2008 Version 2.0.0.382 - SQL Sever 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008) -- Screenshots

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine, because we all just take it for granted, is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data, there are some significant challenges in using Java and MapReduce to drive your analysis to create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session. Here is the scenario - we are looking for fraudulent activities in our big data stream, and in this case we are identifying potentially fraudulent activities by looking for specific patterns. We are using geospatial tagging of each transaction so we can create a real-time fraud-map for our business users. Where we start to move towards a "wow" moment is to extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This will allow us to compute the distance between transactions. Apologies for the quality of this screenshot….hopefully below you see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between each transaction: This allows us to compare the time of the last transaction with the time of the current transaction and see if the distance between the two points is possible given the time frame. Obviously if I buy something in Florida from my favourite bike store (maybe a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona, there is a high probability that this transaction in Arizona is actually fraudulent (I am fast on my Trek but not that fast!) and we can flag this up in real-time on our dashboard: In this post I have used the term "real-time" a couple of times, and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL because it offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo. You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html. You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001
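
    The extended pattern-matching clause described above is not reproduced in this excerpt, but it would look roughly like the following sketch, combining MATCH_RECOGNIZE with an SDO_GEOM distance test (the table, column names and thresholds are invented for illustration):

      -- Sketch: flag a pair of transactions whose locations are implausibly far
      -- apart for the elapsed time; txn_time is assumed to be a DATE column.
      SELECT *
      FROM card_transactions
      MATCH_RECOGNIZE (
          PARTITION BY card_id
          ORDER BY txn_time
          MEASURES A.txn_time AS prev_time,
                   B.txn_time AS curr_time
          ONE ROW PER MATCH
          AFTER MATCH SKIP TO NEXT ROW
          PATTERN (A B)
          DEFINE
              B AS SDO_GEOM.SDO_DISTANCE(B.txn_location, A.txn_location, 0.005, 'unit=KM')
                   > 800 * ((B.txn_time - A.txn_time) * 24)   -- faster than ~800 km/h
      );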

    Read the article

  • Making hover state of hidden list visible when page is active

    - by Joel
    Hi guys, One day I hope to not be such a newbie on this stuff, but some of this feels so insurmountable sometimes! OK. I have a nav bar with hidden li items that are visible when hovered over. Here's the live site: http://www.rattletree.com Here's the code for the nav: <ul id="navbar"> <li id="iex"><a href="index.php">About Rattletree</a></li> <li id="upcomgshows"><a href="upcomingshows.php">Calendar</a></li> <li id="sods"><a href="#">Sights &amp; Sounds</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png" alt="" /></li> <li class="innerlist"><a href="/playlist.m3u" target="_blank" onclick="javascript:BatmoAudioPop('Rattletree Marimba',this.href,'1'); return false">Listen</a></li> <li class="innerlist"><a href="/new_pictures.php">Photos</a></li> <li class="innerlist"><a href="/video.php">Video</a></li> <li class="innerlist"><a href="/press.php">Press</a></li> </ul> </li> <li id="bookin"><a href="#">Contact</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png" alt="" /></li> <li class="innerlist"><a href="/booking.php">Booking Info</a></li> <li class="innerlist"><a href="/media.php">Media Inquiries</a></li> </ul> </li> <li id="ste"> <a href="/sounds.php">Store</a></li> <li id="instrumes"><a href="/instruments.php">The Instruments</a></li> <li id="classe"><a href="classes.php">Workshops</a></li> </ul> css: div#navbar2 { background-color:#546F8B; border-bottom:1px solid #546F8B; border-top:1px solid #000000; display:inline-block; position:relative; width:100%; } div#navbar2 ul#navbar { color:#FFFFFF; font-family:Arial,Helvetica,sans-serif; font-size:16px; letter-spacing:1px; margin:10px 0; padding:0; white-space:nowrap; } div#navbar2 ul#navbar li ul.innerlist { color:#000000; display:none; position:relative; z-index:20; } div#navbar2 ul#navbar li { display:inline; list-style-type:none; margin:0; padding:0; position:relative; } Now it's a bit tricky what I want to do: If a user navigates to one of the innerlist pages, I'd like that innerlist ul to remain visible (with the specific li displaying the hovered state). Now I think I could figure that out on my own, but you can see on the live page that if the user is on a page from the innerlist and that list was visible, then if they hovered over the other nav tab, then those innerlists would overlap. This is a problem. Hopefully that last sentence makes sense! In short: I need to keep the inner list of the active page displaying, BUT if the user hovers over another nav button WITH it's own inner list, then the live innerlist needs to disappear. Clear as mud?
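
    One possible approach, sketched here rather than taken from an actual answer to the thread: have the PHP for each page add a marker class (the name current below is invented) to the active top-level li, show that item's inner list by default, and hide it again whenever some other part of the nav is being hovered, so the two lists never overlap:

      /* Show the inner list for the current page and for whichever item is hovered */
      div#navbar2 ul#navbar li.current ul.innerlist,
      div#navbar2 ul#navbar li:hover ul.innerlist { display: block; }

      /* While the nav is hovered but the current item is not, hide the current item's list */
      div#navbar2 ul#navbar:hover li.current:not(:hover) ul.innerlist { display: none; }

    Note that :not() is a CSS3 selector, so older browsers such as IE8 would need a small JavaScript fallback.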

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group. Have a look at the book samples, especially the Sample package for custom SCD handling. All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns will always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES. ChecksumAlgorithm Property The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0). Original - The original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release. FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option. CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation. The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? Just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window. Downloads The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Checksum Transformation for SQL Server 2005 Checksum Transformation for SQL Server 2008 Checksum Transformation for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.27 – SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010) SQL Server 2008 Version 2.0.0.27 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error Item has already been added. Key in dictionary: '79764919'. Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010) Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008) Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008) SQL Server 2005 Version 1.5.0.43 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error Item has already been added. Key in dictionary: '79764919'. (19 Oct 2010) Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008) Version 1.4.0.0 - Installer refresh only. (22 Dec 2007) Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006) Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with Output column name not persisting. (10 Nov 2005) Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005) Version 1.0.0 - Public Release (Beta). (30 Oct 2004) Screenshot
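
    For reference, the T-SQL functions the transformation is modelled on can be run directly against a table; the results will not match the component's Checksum output because the algorithms differ, and the table and column names below are made up:

      -- Built-in T-SQL equivalents, shown for comparison only
      SELECT CustomerKey,
             CHECKSUM(FirstName, LastName, City)        AS TsqlChecksum,
             BINARY_CHECKSUM(FirstName, LastName, City) AS TsqlBinaryChecksum
      FROM   dbo.DimCustomer;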

    Read the article

  • How to make a form using ajax, onchange event, reload to SAME page

    - by user1348220
    I've been studying this for a while and I'm not sure if I'm going about this the right way because every form I setup according to examples, it doesn't do what I need. I need to setup a form that will: set session when you select from dropdown menu not reload/refresh page (i've read that using AJAX solves this) submit and stay on SAME page (confused because most AJAX examples send it to different process.php page which is supposedly "invisible" but it doesn't "stay" on the same page, it redirects. Basically, client selects quantity of 1 to 10. If they select "2"... it does NOT reload the page.. but it DOES set a session[quantity]=2. Should be simple... but do I POST to same page as form? or POST to different page and it somehow redirects? Also, one test I did it kept pasting my "echo session[quantity]" down the page like 7, 2, 3, 5, etc. etc. each time instead of replacing it. I would paste code but it's all over the place and I'm hoping for direction on which methods to use. Feel I need to start all over again. Edit: trying to add code below but can't seem to paste it properly. <? ob_start();?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <?php session_start(); ?> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Submit Form with out refreshing page Tutorial</title> <!-- JavaScript --> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script> <script type="text/javascript" > $(function() { $(".submit").click(function() { var gender = $("#gender").val(); var dataString = '&gender=' + gender; if(gender=='') { $('.success').fadeOut(200).hide(); $('.error').fadeOut(200).show(); } else { $.ajax({ type: "POST", url: "join.php", data: dataString, success: function(){ $('.success').fadeIn(200).show(); $('.error').fadeOut(200).hide(); } }); } return false; }); }); </script> <style type="text/css"> body{ } .error{ color:#d12f19; font-size:12px; } .success{ color:#006600; font-size:12px; } </style> </head> <body id="public"> <div style="height:30px"></div> <div id="container"> <div style="height:30px"></div> <form method="post" name="form"> <select id="gender" name="gender"> <option value="">Gender</option> <option value="male">Male</option> <option value="female">Female</option> </select> <div> <input type="submit" value="Submit" class="submit"/> <span class="error" style="display:none"> Please Enter Valid Data</span> <span class="success" style="display:none"> Your gender is <?php echo $_SESSION['gender'];?></span> </div> </form> <div style="height:20px"></div> </div><!--container--> </body> </html> <? ob_flush(); ?> and here is my page where the POST goes called join.php (called that in example so I went with it for now) <?php session_start(); if($_POST) { $gender = $_POST['gender']; $_SESSION['gender'] = $gender; } else { } ?>
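
    One way to structure it, purely as a sketch (the file name quantity.php, the field names and the quantity session key are all invented for illustration): let the page handle its own AJAX POST at the top and exit before any HTML is sent, and let a jQuery change handler post the dropdown value back to that same page, so the browser never reloads:

      <?php
      // quantity.php - the same page serves the form and handles the AJAX POST
      session_start();
      if (isset($_POST['quantity'])) {                 // the AJAX request lands here
          $_SESSION['quantity'] = (int) $_POST['quantity'];
          echo $_SESSION['quantity'];                  // small response for the callback
          exit;                                        // don't send the HTML below to the AJAX call
      }
      ?>
      <select id="quantity">
        <option value="1">1</option>
        <option value="2">2</option>
        <option value="3">3</option>
      </select>
      <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
      <script type="text/javascript">
      $(function() {
        $('#quantity').change(function() {             // fires on change, no submit button needed
          $.post('quantity.php', { quantity: $(this).val() }, function(saved) {
            // the page never reloads; 'saved' echoes the value now stored in $_SESSION['quantity']
          });
        });
      });
      </script>

    Posting to the same page or to a separate handler is largely a matter of taste; seeing a redirect usually means the form performed a normal submit instead of being intercepted by the AJAX call.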

    Read the article

< Previous Page | 507 508 509 510 511 512 513 514 515 516 517 518  | Next Page >