Search Results

Search found 18546 results on 742 pages for 'evaluation order'.


  • Confused: why am I getting a C1010 error?

    - by bluepixel
    I have three files: Main, slist.h and slist.cpp can be seen at http://forums.devarticles.com/c-c-help-52/confused-why-i-am-getting-c2143-and-c1010-error-259574.html I'm trying to make a program where main reads the list of student names from a file (roster.txt) and inserts all the names in a list in ascending order. This is the full class roster list (notCheckedIN). From here I will read all students who have come to write the exams, each checkin will transfer their name to another list (in ascending order) called present. The final product is notCheckedIN will contain a list of all those students that did not write the exam and present will contain the list of all students who wrote the exam Main File: // Exam.cpp : Defines the entry point for the console application. #include "stdafx.h" #include "iostream" #include "iomanip" #include "fstream" #include "string" #include "slist.h" using namespace std; void OpenFile(ifstream&); void GetClassRoster(SortList&, ifstream&); void InputStuName(SortList&, SortList&); void UpdateList(SortList&, SortList&, string); void Print(SortList&, SortList&); const string END_DATA = "EndData"; int main() { ifstream roster; SortList notCheckedIn; //students present SortList present; //student absent OpenFile(roster); if(!roster) //Make sure file is opened return 1; GetClassRoster(notCheckedIn, roster); //insert the roster list into the notCheckedIn list InputStuName(present, notCheckedIn); Print(present, notCheckedIn); return 0; } void OpenFile(ifstream& roster) //Precondition: roster is pointing to file containing student anmes //Postcondition:IF file does not exist -> exit { string fileName = "roster.txt"; roster.open(fileName.c_str()); if(!roster) cout << "***ERROR CANNOT OPEN FILE :"<< fileName << "***" << endl; } void GetClassRoster(SortList& notCheckedIN, ifstream& roster) //Precondition:roster points to file containing list of student last name // && notCheckedIN is empty //Postcondition:notCheckedIN is filled with the names taken from roster.txt in ascending order { string name; roster >> name; while(roster) { notCheckedIN.Insert(name); roster >> name; } } void InputStuName(SortList& present, SortList& notCheckedIN) //Precondition: present list is empty initially and notCheckedIN list is full //Postcondition: repeated prompting to enter stuName // && notCheckedIN will delete all names found in present // && present will contain names present // && names not found in notCheckedIN will report Error { string stuName; cout << "Enter last name (Enter EndData if none to Enter): "; cin >> stuName; while(stuName!=END_DATA) { UpdateList(present, notCheckedIN, stuName); } } void UpdateList(SortList& present, SortList& notCheckedIN, string stuName) //Precondition:stuName is assigned //Postcondition:IF stuName is present, stuName is inserted in present list // && stuName is removed from the notCheckedIN list // ELSE stuName does not exist { if(notCheckedIN.isPresent(stuName)) { present.Insert(stuName); notCheckedIN.Delete(stuName); } else cout << "NAME IS NOT PRESENT" << endl; } void Print(SortList& present, SortList& notCheckedIN) //Precondition: present and notCheckedIN contains a list of student Names present/not present //Postcondition: content of present and notCheckedIN is printed { cout << "Candidates Present" << endl; present.Print(); cout << "Candidates Absent" << endl; notCheckedIN.Print(); } Header File: //Specification File: slist.h //This file gives the specifications of a list abstract data type //List items inserted will be in order //Class 
SortList, structured type used to represent an ADT using namespace std; const int MAX_LENGTH = 200; typedef string ItemType; //Class Object (class instance) SortList. Variable of class type. class SortList { //Class Member - components of a class, can be either data or functions public: //Constructor //Post-condition: Empty list is created SortList(); //Const member function. Compiler error occurs if any statement within tries to modify a private data bool isEmpty() const; //Post-condition: == true if list is empty // == false if list is not empty bool isFull() const; //Post-condition: == true if list is full // == false if list is full int Length() const; //Post-condition: size of list void Insert(ItemType item); //Precondition: NOT isFull() && item is assigned //Postcondition: item is in list && Length() = Length()@entry + 1 void Delete(ItemType item); //Precondition: NOT isEmpty() && item is assigned //Postcondition: // IF items is in list at entry // first occurance of item in list is removed // && Length() = Length()@entry -1; // ELSE // list is not changed bool isPresent(ItemType item) const; //Precondition: item is assigned //Postcondition: == true if item is present in list // == false if item is not present in list void Print() const; //Postcondition: All component of list have been output private: int length; ItemType data[MAX_LENGTH]; void BinSearch(ItemType, bool&, int&) const; }; Source File: //Implementation File: slist.cpp //This file gives the specifications of a list abstract data type //List items inserted will be in order //Class SortList, structured type used to represent an ADT #include "iostream" #include "slist.h" using namespace std; // int length; // ItemType data[MAX_SIZE]; //Class Object (class instance) SortList. Variable of class type. SortList::SortList() //Constructor //Post-condition: Empty list is created { length=0; } //Const member function. 
Compiler error occurs if any statement within tries to modify a private data bool SortList::isEmpty() const //Post-condition: == true if list is empty // == false if list is not empty { return(length==0); } bool SortList::isFull() const //Post-condition: == true if list is full // == false if list is full { return (length==(MAX_LENGTH-1)); } int SortList::Length() const //Post-condition: size of list { return length; } void SortList::Insert(ItemType item) //Precondition: NOT isFull() && item is assigned //Postcondition: item is in list && Length() = Length()@entry + 1 // && list componenet are in ascending order of value { int index; index = length -1; while(index >=0 && item<data[index]) { data[index+1]=data[index]; index--; } data[index+1]=item; length++; } void SortList:elete(ItemType item) //Precondition: NOT isEmpty() && item is assigned //Postcondition: // IF items is in list at entry // first occurance of item in list is removed // && Length() = Length()@entry -1; // && list components are in ascending order // ELSE data array is unchanged { bool found; int position; BinSearch(item,found,position); if (found) { for(int index = position; index < length; index++) data[index]=data[index+1]; length--; } } bool SortList::isPresent(ItemType item) const //Precondition: item is assigned && length <= MAX_LENGTH && items are in ascending order //Postcondition: true if item is found in the list // false if item is not found in the list { bool found; int position; BinSearch(item,found,position); return (found); } void SortList::Print() const //Postcondition: All component of list have been output { for(int x= 0; x<length; x++) cout << data[x] << endl; } void SortList::BinSearch(ItemType item, bool found, int position) const //Precondition: item contains item to be found // && item in the list is an ascending order //Postcondition: IF item is in list, position is returned // ELSE item does not exist in the list { int first = 0; int last = length -1; int middle; found = false; while(!found) { middle = (first+last)/2; if(data[middle]<item) first = middle+1; else if (data[middle] > item) last = middle -1; else found = true; } if(found) position = middle; } I cannot get rid of the C1010 error: fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source? Is there a way to get rid of this error? When I included "stdafx.h" I received the following 32 errors (which does not make sense to me why because I referred back to my manual on how to use Class method - everything looks a.ok.) Error 1 error C2871: 'std' : a namespace with this name does not exist c:\..\slist.h 6 Error 2 error C2146: syntax error : missing ';' before identifier 'ItemType' c:\..\slist.h 8 Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\..\slist.h 8 Error 4 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\..\slist.h 8 Error 5 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 30 Error 6 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 34 Error 7 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 43 Error 8 error C2146: syntax error : missing ';' before identifier 'data' c:\..\slist.h 52 Error 9 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\..\slist.h 52 Error 10 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\..\slist.h 52 Error 11 error C2061: syntax error : identifier 'ItemType' c:\..\slist.h 53 Error 12 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 41 Error 13 error C2761: 'void SortList::Insert(void)' : member function redeclaration not allowed c:\..\slist.cpp 41 Error 14 error C2059: syntax error : ')' c:\..\slist.cpp 41 Error 15 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 45 Error 16 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 45 Error 17 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 57 Error 18 error C2761: 'void SortList:elete(void)' : member function redeclaration not allowed c:\..\slist.cpp 57 Error 19 error C2059: syntax error : ')' c:\..\slist.cpp 57 Error 20 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 65 Error 21 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 65 Error 22 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 79 Error 23 error C2761: 'bool SortList::isPresent(void) const' : member function redeclaration not allowed c:\..\slist.cpp 79 Error 24 error C2059: syntax error : ')' c:\..\slist.cpp 79 Error 25 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 83 Error 26 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 83 Error 27 error C2065: 'data' : undeclared identifier c:\..\slist.cpp 95 Error 28 error C2146: syntax error : missing ')' before identifier 'item' c:\..\slist.cpp 98 Error 29 error C2761: 'void SortList::BinSearch(void) const' : member function redeclaration not allowed c:\..\slist.cpp 98 Error 30 error C2059: syntax error : ')' c:\..\slist.cpp 98 Error 31 error C2143: syntax error : missing ';' before '{' c:\..\slist.cpp 103 Error 32 error C2447: '{' : missing function header (old-style formal list?) c:\..\slist.cpp 103
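    A hedged reading of what is going on, for anyone hitting the same wall: C1010 means the project has precompiled headers enabled, so the compiler skips everything in a .cpp file until it sees #include "stdafx.h" - and slist.cpp never includes it, hence the fatal error. If stdafx.h is then added below the other includes, everything above it is silently skipped, <string> never gets pulled in, and the C2871/ItemType cascade follows. The listing also contains a typo that lines up with errors 17-21: "SortList:elete" has only a single colon. A minimal sketch of the likely fixes, assuming precompiled headers stay enabled:

      // slist.cpp
      #include "stdafx.h"   // must be the very first include when PCH is on;
                            // the compiler ignores everything above this line
      #include <iostream>   // standard headers belong in angle brackets
      #include <string>     // slist.h uses std::string, so include it first
      #include "slist.h"

      // was "SortList:elete" -- the missing colon broke the member definition
      void SortList::Delete(ItemType item)
      {
          // ...
      }

      // the definition must take found/position by reference, matching the
      // header's declaration, or the compiler rejects it as a mismatch --
      // and by-value parameters would never return results to the caller
      void SortList::BinSearch(ItemType item, bool& found, int& position) const
      {
          // ... also guard the loop with (first <= last), or searching for a
          // missing item will spin forever
      }

    Alternatively, precompiled headers can be switched off (project properties, C/C++, Precompiled Headers, "Not Using Precompiled Headers"), in which case the stdafx.h include is unnecessary.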

    Read the article

  • SQL Server Configuration timeouts - and a workaround [SSIS]

    - by jamiet
    Ever since I started writing SSIS packages back in 2004 I have opted to store configurations in .dtsConfig (i.e. XML) files rather than in a SQL Server table (aka SQL Server Configurations), however recently I inherited some packages that used SQL Server Configurations and thus had to immerse myself in their murky little world. To all the people that have ever gone onto the SSIS forum and asked questions about ambiguous behaviour of SQL Server Configurations I now say this... I feel your pain! The biggest problem I have had was in dealing with the change to the order in which configurations get applied that came about in SSIS 2008. Those changes are detailed on MSDN at SSIS Package Configurations, however the pertinent bits are:

    As the utility loads and runs the package, events occur in the following order:
    1. The dtexec utility loads the package.
    2. The utility applies the configurations that were specified in the package at design time and in the order that is specified in the package. (The one exception to this is the Parent Package Variables configurations. The utility applies these configurations only once and later in the process.)
    3. The utility then applies any options that you specified on the command line.
    4. The utility then reloads the configurations that were specified in the package at design time and in the order specified in the package. (Again, the exception to this rule is the Parent Package Variables configurations.) The utility uses any command-line options that were specified to reload the configurations. Therefore, different values might be reloaded from a different location.
    5. The utility applies the Parent Package Variable configurations.
    6. The utility runs the package.

    To understand how these steps differ from SSIS 2005 I recommend reading Doug Laudenschlager’s blog post Understand how SSIS package configurations are applied. The very nature of SQL Server Configurations means that the Connection String for the database holding the configuration values needs to be supplied from the command line. Typically then the call to execute your package resembles this:

      dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;\""

    The problem then is that, as per the steps above, the package will (1) attempt to apply all configurations using the Connection String stored in the package for the "SSISConfigurations" Connection Manager, before then (2) applying the Connection String from the command line and then (3) applying the same configurations all over again. In the packages that I inherited, that first attempt to apply the configurations would time out (not unexpected); I had 8 SQL Server Configurations in the package and thus the package was waiting for 2 minutes until all the Configurations timed out (i.e. 15 seconds per Configuration) - in a package that only executes for ~8 seconds when it gets to do its actual work, a delay of 2 minutes was simply unacceptable. We had three options in how to deal with this:

    1. Get rid of the use of SQL Server Configurations and use .dtsConfig files instead
    2. Edit the packages when they get deployed
    3. Change the timeout on the "SSISConfigurations" Connection Manager

    #1 was my preferred choice but, for reasons I explain below*, wasn't an option in this particular instance. #2 was discounted out of hand because it negates the point of using Configurations in the first place. This left us with #3 - change the timeout on the Connection Manager. This is done by going into the properties of the Connection Manager, opening the "All" tab and changing the Connect Timeout property to some suitable value (in the screenshot below I chose 2 seconds). This change meant that the attempts to apply the SQL Server Configurations timed out in 16 seconds rather than two minutes; clearly this isn't an optimum solution but it's certainly better than it was. So there you have it - if you are having problems with SQL Server Configuration timeouts within SSIS, try changing the timeout of the Connection Manager. Better still - don't bother using SQL Server Configurations in the first place. Even better - install RC0 of SQL Server 2012 to start leveraging SSIS parameters and leave the nasty old world of configurations behind you.
    @Jamiet
    * Basically, we are leveraging an SSIS execution/logging framework in which the client had invested a lot of resources, and SQL Server Configurations are an integral part of that.

    Read the article

  • Using Lambdas for return values in Rhino.Mocks

    - by PSteele
    In a recent StackOverflow question, someone showed some sample code they’d like to be able to use. The particular syntax they used isn’t supported by Rhino.Mocks, but it was an interesting idea that I thought could be easily implemented with an extension method.

    Background

    When stubbing a method return value, Rhino.Mocks supports the following syntax:

      dependency.Stub(s => s.GetSomething()).Return(new Order());

    The method signature is generic and therefore you get compile-time type checking that the object you’re returning matches the return value defined by the “GetSomething” method. You could also have Rhino.Mocks execute arbitrary code using the “Do” method:

      dependency.Stub(s => s.GetSomething()).Do((Func<Order>) (() => new Order()));

    This requires the cast though. It works, but isn’t as clean as the original poster wanted. They showed a simple example of something they’d like to see:

      dependency.Stub(s => s.GetSomething()).Return(() => new Order());

    Very clean, simple and no casting required. While Rhino.Mocks doesn’t support this syntax, it’s easy to add it via an extension method. The Rhino.Mocks “Stub” method returns an IMethodOptions<T>. We just need to accept a Func<T> and use that as the return value. At first, this would seem straightforward:

      public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
      {
          opts.Return(factory());
          return opts;
      }

    And this would work and would provide the syntax the user was looking for. But the problem with this is that you lose the late-bound semantics of a lambda. The Func<T> is executed immediately and stored as the return value. At the point you’re setting up your mocks and stubs (the “Arrange” part of “Arrange, Act, Assert”), you may not want the lambda executing – you probably want it delayed until the method is actually executed and Rhino.Mocks plugs in your return value. So let’s make a few small tweaks:

      public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
      {
          opts.Return(default(T)); // required for Rhino.Mocks on non-void methods
          opts.WhenCalled(mi => mi.ReturnValue = factory());
          return opts;
      }

    As you can see, we still need to set up some kind of return value or Rhino.Mocks will complain as soon as it intercepts a call to our stubbed method. We use the “WhenCalled” method to set the return value equal to the execution of our lambda. This gives us the delayed execution we’re looking for and a nice syntax for lambda-based return values in Rhino.Mocks.

    Technorati Tags: .NET, Rhino.Mocks, Mocking, Extension Methods
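    A quick usage sketch (the IDependency interface and GetSomething method here are hypothetical stand-ins; it assumes the extension method above is in scope):

      // stub a dependency; the lambda runs at call time, not at stub-setup time
      var dependency = MockRepository.GenerateStub<IDependency>();
      dependency.Stub(s => s.GetSomething()).Return(() => new Order());

      // each call evaluates the factory, so it could even yield a fresh Order
      Order result = dependency.GetSomething();

    Overload resolution picks the extension method automatically, because a lambda cannot convert to Order, so the built-in Return(Order) overload never matches.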

    Read the article

  • Common business drivers that lead to creating and sustaining a project

    Common business drivers that lead to creating and sustaining a project include, but are not limited to: cost reduction, increased return on investment (ROI), reduced time to market, increased speed and efficiency, increased security, and increased interoperability. These drivers primarily focus on streamlining and reducing cost to make a company more profitable with less overhead. According to Answers.com, cost reduction is defined as reducing costs to improve profitability, and may be implemented when a company is having financial problems or to prevent them. ROI is defined as the amount of value received relative to the amount of money invested, according to PayPerClickList.com. With the ever-increasing demands on businesses to compete in today’s market, companies are constantly striving to reduce the time it takes for a concept to become a product and be sold within the global marketplace. In business, some people say time is money, so if a project can reduce the time a business process takes, it in effect saves the company money, which is always good for the bottom line. The Social Security Administration states that data security is the protection of data from accidental or intentional but unauthorized modification or destruction. Interoperability is the capability of a system or subsystem to interact with other systems or subsystems. In my personal opinion, these drivers would not really differ for a profit-based organization compared to a non-profit organization. Both corporate entities strive to reduce cost and keep operating budgets low. However, the reasoning behind why they want to achieve this does contrast. Typically, profit-based organizations strive to increase revenue and market share so that the business can grow. Alternatively, not-for-profit businesses are more interested in increasing their reach within communities, whether it is to increase annual donations or invest in the lives of others. Success or failure of a project can be determined by one or more of these drivers, based on the scope of a project and the company’s priorities associated with each of the drivers. In addition, if a project attempts to incorporate multiple drivers and is only partially successful, then the project might still be considered a success due to how close the project came to meeting each of the priorities. Continuous evaluation of the project could lead to a decision to abort a project because it is expected to fail before completion. Evaluations should be executed after the completion of every software development process stage. Pfleeger notes that software development process stages include:

    Requirements Analysis and Definition
    System Design
    Program Design
    Program Implementation
    Unit Testing
    Integration Testing
    System Delivery
    Maintenance

    Each evaluation at every stage should consider all the business drivers included in the scope of a project and how close each is expected to come to meeting expectations. In addition, minimum requirements of acceptance should also be included within the scope of the project and should be reevaluated as the project progresses, to ensure that the project makes good economic sense to continue. If the project falls below these benchmarks, then the project should be put on hold until it does make more sense, or aborted because it does not meet the business driver requirements.

    References
    Cost Reduction Program. (n.d.). Dictionary of Accounting Terms. Retrieved July 19, 2009, from Answers.com Web site: http://www.answers.com/topic/cost-reduction-program
    Government Information Exchange. (n.d.). Government Information Exchange Glossary. Retrieved July 19, 2009, from SSA.gov Web site: http://www.ssa.gov/gix/definitions.html
    PayPerClickList.com. (n.d.). Glossary Term R - Pay Per Click List. Retrieved July 19, 2009, from PayPerClickList.com Web site: http://www.payperclicklist.com/glossary/termr.html
    Pfleeger, S. & Atlee, J. (2009). Software Engineering: Theory and Practice. Boston: Prentice Hall.
    Veluchamy, Thiyagarajan. (n.d.). Glossary « Thiyagarajan Veluchamy’s Blog. Retrieved July 19, 2009, from Thiyagarajan.WordPress.com Web site: http://thiyagarajan.wordpress.com/glossary/

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple of weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to SlideShare, I could quickly point her in the direction that my in-person advice would have led her.

    First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. In this presentation, I describe two modes of reporting results that I use.

    Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.

    That said, I have been continuing to evolve my methods and reporting techniques, and sometimes there is no time to create that kind of report -- the team can't wait the days that it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete -- but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results.

    Read the article

  • Must go through Windows Boot Loader to get to Grub

    - by Zach
    I just installed a fresh copy of Precise alongside Windows 7. I have two separate 750GB hard drives; /dev/sda holds the Windows partitions and /dev/sdb holds the Ubuntu partitions. Other than that, these are fresh installs of both Windows 7 and Ubuntu 12.04. Whenever I boot, GRUB doesn't load; instead it goes to a black screen with a single blinking (horizontal bar) cursor in the top right corner. However, if I boot and hit escape right as the BIOS/POST screen finishes up, I see the Windows Boot Loader, and hitting escape again makes it go back to the BIOS screen. After the BIOS screen, GRUB shows up and everything functions normally; I can boot into Ubuntu or Win7. I don't want to have to do the Escape, Escape, Wait, Boot trick every time. I have no idea what would be wrong or what information I could give you guys to help diagnose. I have run a sudo update-grub and it found everything normally. I tried adding the nomodeset flag to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, which searching around made me think might work. Thoughts on what I could do to fix this?

    EDIT: I've tried changing the boot order so that both drives in the BIOS (both are labeled as "Internal HDD") have had a try booting first. I think the problem may be that every time I boot, the BIOS boot order is different... and I have to reset it. It seems to not be stable... but I'm not sure how to go about fixing that either. The machine has both traditional BIOS and UEFI. It came standard in "Legacy" mode, so it is currently set to boot through Legacy mode. I've reinstalled Ubuntu now, and now if I hit escape at the end of the BIOS/POST startup screen, it takes me to the GRUB menu. Otherwise it automatically loads Windows. It seems like GRUB is now the acting bootloader; it just doesn't automatically start unless I ask the BIOS to open a bootloader. On my other machines, it has always automatically started at the end of BIOS/POST.

    EDIT2: Using gparted, I just looked at my partitions; it would seem that my linux-swap partition is currently flagged as the boot partition for my Ubuntu install. I currently only have 2 partitions: one of "ext4" with a mount point of "/" and flag " "; and the "linux-swap" with mount point " " and flag "boot." If I change the boot flag to be on "/," it does not reliably solve the problem. After 10 boots:

    2 booted successfully to GRUB
    5 booted directly to Windows 7
    3 booted to the black screen with the cursor and hung there

    Further research makes me think this is an issue of the BIOS not reliably booting hard drives in the same order, or not finding both hard drives. If I ask it to create a "boot menu," sometimes it has 2 entries for "Internal HDD," sometimes 1. Also, the list it creates changes order every time I bring it up, so it is not following a consistent boot sequence. Will report back if this is not an issue with GRUB.
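    One common workaround worth mentioning, given the unstable BIOS drive order described above: install GRUB to the MBR of both drives, so that whichever disk the BIOS happens to pick first still hands off to GRUB. A sketch, run from the Ubuntu install (this assumes /dev/sdb1 is the ext4 "/" partition; it is a suggestion, not a verified fix for this machine):

      # put GRUB on the MBR of BOTH drives so the BIOS's choice no longer matters
      sudo grub-install /dev/sda
      sudo grub-install /dev/sdb
      sudo update-grub    # regenerate grub.cfg; os-prober should pick up Windows 7

      # the boot flag belongs on a real partition, not linux-swap
      sudo parted /dev/sdb set 1 boot on

    This doesn't cure the flaky BIOS enumeration itself, but it removes GRUB from the set of things that depend on it.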

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized. A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in a descending way.

      SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:

    TABLE_ID: the name of the object - it can also be a dictionary or an index
    COLUMN_ID: the column name the object belongs to - you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    RECORDS_COUNT: the number of rows in the column
    USED_SIZE: the memory used for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

    Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the nearest decimal that is relevant to you.
    Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to higher density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.

    Read the article

  • sort django queryset by latest instance of a subset of related model

    - by rsp
    I have an Order model and an OrderEvent model. Each OrderEvent has a foreign key to Order, so from an order instance I can get myorder.order_event_set. I want to get a list of all orders, but I want them to be sorted by the date of the last event. A statement like this works to sort by the latest event date:

      queryset = Order.objects.all().annotate(latest_event_date=Max('order_event__event_datetime')).order_by('latest_event_date')

    However, what I really need is a list of all orders sorted by the latest date of A SUBSET OF EVENTS. For example, my events are categorized into "scheduling", "processing", etc., so I should be able to get a list of all orders sorted by the latest scheduling event. This Django doc (https://docs.djangoproject.com/en/dev/topics/db/aggregation/#filter-and-exclude) shows how I can get the latest scheduling event using a filter, but this excludes orders without a scheduling event. I thought I could combine the filtered queryset with a queryset that includes back those orders that are missing a scheduling event... but I'm not quite sure how to do this. I saw answers related to using a Python list, but it would be much more useful to have a proper Django queryset (i.e. for a view with pagination, etc.).
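    A sketch of one way to keep everything in a single queryset, assuming a Django version (2.0 or later) where aggregates accept a filter argument, and a hypothetical category field on OrderEvent; orders with no scheduling event are annotated with NULL instead of being dropped:

      from django.db.models import Max, Q

      # annotate every order with the latest datetime among its "scheduling"
      # events only; orders without any scheduling event get NULL and stay in
      queryset = (
          Order.objects
          .annotate(
              latest_scheduling=Max(
                  'order_event__event_datetime',
                  filter=Q(order_event__category='scheduling'),  # assumed field name
              )
          )
          .order_by('latest_scheduling')
      )

    Where the NULL-annotated orders land in the ordering is database-dependent (PostgreSQL puts them last on ascending sorts), but the result is a real queryset, so pagination in a view works as usual.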

    Read the article

  • Why am I getting a " instance has no attribute '__getitem__' " error?

    - by Kevin Yusko
    Here's the code: class BinaryTree: def __init__(self,rootObj): self.key = rootObj self.left = None self.right = None root = [self.key, self.left, self.right] def getRootVal(root): return root[0] def setRootVal(newVal): root[0] = newVal def getLeftChild(root): return root[1] def getRightChild(root): return root[2] def insertLeft(self,newNode): if self.left == None: self.left = BinaryTree(newNode) else: t = BinaryTree(newNode) t.left = self.left self.left = t def insertRight(self,newNode): if self.right == None: self.right = BinaryTree(newNode) else: t = BinaryTree(newNode) t.right = self.right self.right = t def buildParseTree(fpexp): fplist = fpexp.split() pStack = Stack() eTree = BinaryTree('') pStack.push(eTree) currentTree = eTree for i in fplist: if i == '(': currentTree.insertLeft('') pStack.push(currentTree) currentTree = currentTree.getLeftChild() elif i not in '+-*/)': currentTree.setRootVal(eval(i)) parent = pStack.pop() currentTree = parent elif i in '+-*/': currentTree.setRootVal(i) currentTree.insertRight('') pStack.push(currentTree) currentTree = currentTree.getRightChild() elif i == ')': currentTree = pStack.pop() else: print "error: I don't recognize " + i return eTree def postorder(tree): if tree != None: postorder(tree.getLeftChild()) postorder(tree.getRightChild()) print tree.getRootVal() def preorder(self): print self.key if self.left: self.left.preorder() if self.right: self.right.preorder() def inorder(tree): if tree != None: inorder(tree.getLeftChild()) print tree.getRootVal() inorder(tree.getRightChild()) class Stack: def __init__(self): self.items = [] def isEmpty(self): return self.items == [] def push(self, item): self.items.append(item) def pop(self): return self.items.pop() def peek(self): return self.items[len(self.items)-1] def size(self): return len(self.items) def main(): parseData = raw_input( "Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) " ) tree = buildParseTree(parseData) print( "The post order is: ", + postorder(tree)) print( "The post order is: ", + postorder(tree)) print( "The post order is: ", + preorder(tree)) print( "The post order is: ", + inorder(tree)) main() And here is the error: Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) ( 1 + 2 ) Traceback (most recent call last): File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 108, in main() File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 102, in main tree = buildParseTree(parseData) File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 46, in buildParseTree currentTree = currentTree.getLeftChild() File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 15, in getLeftChild return root[1] AttributeError: BinaryTree instance has no attribute '__getitem__'
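    The traceback follows from how the accessor methods are written: root = [self.key, self.left, self.right] is a throwaway local inside __init__, and in def getLeftChild(root) the name root is actually the instance (it sits in the self position), so root[1] tries to index a BinaryTree object, which has no __getitem__. A sketch of the fix is to treat the first parameter as self and read the attributes directly:

      class BinaryTree:
          def __init__(self, rootObj):
              self.key = rootObj
              self.left = None
              self.right = None

          def getRootVal(self):
              return self.key      # the state lives in attributes, not a list

          def setRootVal(self, newVal):
              self.key = newVal

          def getLeftChild(self):
              return self.left     # was root[1], which indexed the instance

          def getRightChild(self):
              return self.right    # was root[2]

    Calls like currentTree.getLeftChild() in buildParseTree then work unchanged.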

    Read the article

  • curl POST to RESTful services

    - by Sashikiran Challa
    Hello All, There are a lot of questions on Stackoverflow about curl but I could not figure out what is that I am doing what I am not supposed to. I am trying to call a RESTful service that I had written using Jersey API and am trying to POST an xml string to it and I get HTTP 415 error which is supposed to be a Media Type error. Here in my shell script call to 1st service: abc=curl http://gf...:8080/InChItoD/inchi/3dstructure?InChIstring=$inchi echo $abc (this works fine the output that it returns is given below.) Posting this xml string to second service def= curl -d $abc -H "Content-Type:text/xml" http://gf...:8080/XML2G/xml3d/gssinput I get the following error: ... ... HTTP Status 415 Status report message description.The server refused this request because the request entity is in a format not supported by the requested resource for the requested method ().Apache Tomcat/6.0.26 This is a sample of xml string I am trying to POST <?xml version="1.0"?><molecule xmlns="http://www.xml-cml.org/schema"> <atomArray> <atom id="a1" elementType="N" formalCharge="1" x3="0.997963" y3="-0.002882" z3="-0.004222"/> <atom id="a2" elementType="H" x3="2.024650" y3="-0.002674" z3="0.004172"/> <atom id="a3" elementType="H" x3="0.655444" y3="0.964985" z3="0.004172"/> <atom id="a4" elementType="H" x3="0.649003" y3="-0.496650" z3="0.825505"/> <atom id="a5" elementType="H" x3="0.662767" y3="-0.477173" z3="-0.850949"/> </atomArray> <bondArray> <bond atomRefs2="a1 a2" order="1"/> <bond atomRefs2="a1 a3" order="1"/> <bond atomRefs2="a1 a4" order="1"/> <bond atomRefs2="a1 a5" order="1"/> </bondArray></molecule> Thanks in advance
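    Two shell problems compound the 415 here, independent of the service itself: abc=curl http://... does not run curl and capture its output (that needs command substitution), and passing $abc to -d unquoted splits the XML at every space, so only the fragment before the first space is actually posted. A hedged sketch, keeping the URLs from the question:

      # capture the first service's XML output with command substitution
      abc=$(curl "http://gf...:8080/InChItoD/inchi/3dstructure?InChIstring=$inchi")
      echo "$abc"

      # quote the variable so the XML travels as one request body; note also
      # no space around = in the assignment, and a space after the header colon
      def=$(curl -d "$abc" -H "Content-Type: text/xml" \
            "http://gf...:8080/XML2G/xml3d/gssinput")

    If the 415 persists after that, check the Jersey resource's @Consumes annotation: if it declares application/xml rather than text/xml, the Content-Type header must match it.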

    Read the article

  • Methods for deep cloning objects in C#

    - by Anry
    I have a class: public class Order : BaseEPharmObject { public Order() { } public virtual Guid Id { get; set; } public virtual DateTime Created { get; set; } public virtual DateTime? Closed { get; set; } public virtual OrderResult OrderResult { get; set; } public virtual decimal Balance { get; set; } public virtual Customer Customer { get; set; } public virtual Shift Shift { get; set; } public virtual Order LinkedOrder { get; set; } public virtual User CreatedBy { get; set; } public virtual decimal TotalPayable { get; set; } public virtual IList<Transaction> Transactions { get; set; } public virtual IList<Payment> Payments { get; set; } } I need to clone objects of class Order. How to implement a deep copy right in the base class?
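    One classic approach from this era is a serialization round-trip in the base class; this is a sketch, assuming every concrete type in the graph (Order, Customer, Shift, Transaction, Payment, ...) can be marked [Serializable]:

      using System;
      using System.IO;
      using System.Runtime.Serialization.Formatters.Binary;

      [Serializable]
      public abstract class BaseEPharmObject
      {
          // serialize the whole object graph to memory and read it back:
          // every reachable object is copied, which makes the clone deep
          public virtual T DeepClone<T>() where T : BaseEPharmObject
          {
              var formatter = new BinaryFormatter();
              using (var stream = new MemoryStream())
              {
                  formatter.Serialize(stream, this);
                  stream.Position = 0;
                  return (T)formatter.Deserialize(stream);
              }
          }
      }

      // usage sketch:
      // Order cloned = myOrder.DeepClone<Order>();

    Note that [Serializable] is not inherited, so each derived class needs it too, and identifiers such as Id may need resetting on the copy. BinaryFormatter is also considered unsafe in modern .NET, so this fits the legacy context of the question rather than new code.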

    Read the article

  • I assume Row_Number doesn’t act only on rows of the window frame

    - by AspOnMyNet
    a) This quote is taken from http://www.postgresql.org/docs/current/static/tutorial-window.html: "for each row, there is a set of rows within its partition called its window frame. Many (but not all) window functions act only on the rows of the window frame, rather than of the whole partition. By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the ORDER BY clause." I assume ROW_NUMBER doesn't act only on rows of the window frame, but instead always acts on all rows of a partition?

    b) "By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the ORDER BY clause." I assume that is only true for those window functions that act only on rows of the window frame (thus the above quote isn't true for the ROW_NUMBER() function)?

    c) http://www.postgresql.org/docs/current/static/tutorial-window.html talks about PostgreSQL 8.4's windowing functions. Is everything in that article also true for SQL Server 2008's windowing functions?

    Thanks
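    The reading in (a) and (b) is right: ROW_NUMBER() is defined over the whole partition and ignores the frame, while true aggregates used as window functions respect the default frame, which includes peers of the current row. A small PostgreSQL experiment makes this visible:

      -- with a tie at 20, COUNT(*) uses the default frame (start of partition
      -- through the current row, plus peer rows), while ROW_NUMBER() just
      -- numbers every row sequentially, frame or no frame
      SELECT val,
             ROW_NUMBER() OVER (ORDER BY val) AS rn,   -- 1, 2, 3, 4
             COUNT(*)     OVER (ORDER BY val) AS cnt   -- 1, 3, 3, 4
      FROM (VALUES (10), (20), (20), (30)) AS t(val);

    As for (c): the tutorial does not carry over wholesale. SQL Server 2008 allows ORDER BY in the OVER clause only for ranking functions like ROW_NUMBER; aggregates with ORDER BY and explicit framing (ROWS/RANGE) arrived in SQL Server 2012, so the COUNT(*) line above would not run on 2008.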

    Read the article

  • Why use short-circuit code?

    - by Tim Lytle
    Related Questions: Benefits of using short-circuit evaluation, Why would a language NOT use Short-circuit evaluation?, Can someone explain this line of code please? (Logic & Assignment operators)

    There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons? I'm not asking about situations where two entities need to be evaluated anyway, for example:

      if($user->auth() AND $model->valid()){
          $model->save();
      }

    To me the reasoning there is clear - since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose:

      if(is_string($userid) AND strlen($userid) > 10){
          //do something
      };

    Because it wouldn't be wise to call strlen() with a non-string value. What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:

      defined('APPLICATION_PATH')
          || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    This could have been:

      if(!defined('APPLICATION_PATH')){
          define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
      }

    Or even as a single statement:

      if(!defined('APPLICATION_PATH'))
          define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?

    Read the article

  • Code Trivia: optimize the code for multiple nested loops

    - by CodeToGlory
    I came across this code today and wondering what are some of the ways we can optimize it. Obviously the model is hard to change as it is legacy, but interested in getting opinions. Changed some names around and blurred out some core logic to protect. private static Payment FindPayment(Order order, Customer customer, int paymentId) { Payment payment = Order.Payments.FindById(paymentId); if (payment != null) { if (payment.RefundPayment == null) { return payment; } if (String.Compare(payment.RefundPayment, "refund", true) != 0 ) { return payment; } } Payment finalPayment = null; foreach (Payment testpayment in Order.payments) { if (testPayment.Customer.Name != customer.Name){continue;} if (testPayment.Cancelled) { continue; } if (testPayment.RefundPayment != null) { if (String.Compare(testPayment.RefundPayment, "refund", true) == 0 ) { continue; } } if (finalPayment == null) { finalPayment = testPayment; } else { if (testPayment.Value > finalPayment.Value) { finalPayment = testPayment; } } } if (finalPayment == null) { return payment; } return finalPayment; } Making this a wiki so code enthusiasts can answer without worrying about points.
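    The selection loop collapses naturally into a LINQ pipeline. A sketch that keeps the original semantics (skip payments by other customers, cancelled payments, and refunds, then take the highest-value remainder, falling back to the directly looked-up payment); it normalizes the Order/order casing from the listing and assumes using System.Linq is in scope:

      private static Payment FindPayment(Order order, Customer customer, int paymentId)
      {
          Payment payment = order.Payments.FindById(paymentId);
          if (payment != null &&
              !string.Equals(payment.RefundPayment, "refund", StringComparison.OrdinalIgnoreCase))
          {
              // covers both the null-RefundPayment case and the non-"refund" case
              return payment;
          }

          Payment finalPayment = order.Payments
              .Where(p => p.Customer.Name == customer.Name)
              .Where(p => !p.Cancelled)
              .Where(p => !string.Equals(p.RefundPayment, "refund", StringComparison.OrdinalIgnoreCase))
              .OrderByDescending(p => p.Value)
              .FirstOrDefault();

          return finalPayment ?? payment;
      }

    Same work, but each filtering rule reads as one line, and the "largest value wins" rule is explicit in the OrderByDescending instead of being threaded through an accumulator.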

    Read the article

  • apache: cgi links lead to a "you have chosen to open foo.cgi", although scriptalias is set

    - by Xiong Chiamiov
    Following this guide on CentOS 5.2, just getting nagios set up for the first time. The main page shows up just fine, but when I try to view any of the pages that should be generated by a cgi process, firefox prompts me to save the .cgi instead, so apache's obviously not understanding that it needs to run the cgi and get back some html from it. The odd thing is, though, that, as far as I can tell, apache should be running these files as cgi. nagios.conf: # SAMPLE CONFIG SNIPPETS FOR APACHE WEB SERVER # Last Modified: 11-26-2005 # # This file contains examples of entries that need # to be incorporated into your Apache web server # configuration file. Customize the paths, etc. as # needed to fit your system. ScriptAlias /nagios/cgi-bin/ "/usr/lib/nagios/cgi/" # SSLRequireSSL Options +ExecCGI AddHandler cgi-script .cgi AllowOverride None Order allow,deny Allow from all # Order deny,allow # Deny from all # Allow from 127.0.0.1 AuthName "Nagios Access" AuthType Basic AuthUserFile /etc/nagios/htpasswd.users Require valid-user Alias /nagios "/usr/share/nagios/" # SSLRequireSSL DirectoryIndex index.php Options None AllowOverride None Order allow,deny Allow from all # Order deny,allow # Deny from all # Allow from 127.0.0.1 AuthName "Nagios Access" AuthType Basic AuthUserFile /etc/nagios/htpasswd.users Require valid-use Either the ScriptAlias directive or ExecCGI option should be triggering this, but neither of them seems to have any effect. This config file is being parsed by apache, because if I move it out of conf.d, /nagios gives a 404. The .cgi files are indeed in the /nagios/cgi-bin/ directory, so I didn't specify the incorrect directory. Searching seemed to only provide people who had difficulty with permissions, which is not the issue here. This seems to me to be a pretty basic thing, but even with the excellent apache documentation, I'm at a bit of a loss (been using cherokee too much lately :) ).
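    One thing worth checking before anything else: the snippet above has clearly lost its <Directory> container lines (angle-bracketed directives are easily swallowed when a config is pasted into a web page), so it is worth comparing the live file against the stock Nagios snippet, which wraps each block roughly like this (a reconstruction of the standard nagios.conf, not the poster's verified file):

      ScriptAlias /nagios/cgi-bin/ "/usr/lib/nagios/cgi/"

      <Directory "/usr/lib/nagios/cgi/">
         Options +ExecCGI
         AddHandler cgi-script .cgi
         AllowOverride None
         Order allow,deny
         Allow from all
         AuthName "Nagios Access"
         AuthType Basic
         AuthUserFile /etc/nagios/htpasswd.users
         Require valid-user
      </Directory>

      Alias /nagios "/usr/share/nagios/"

      <Directory "/usr/share/nagios/">
         DirectoryIndex index.php
         Options None
         AllowOverride None
         Order allow,deny
         Allow from all
         AuthName "Nagios Access"
         AuthType Basic
         AuthUserFile /etc/nagios/htpasswd.users
         Require valid-user
      </Directory>

    A working ScriptAlias executes everything under it unconditionally, so a download prompt suggests the request is being served through the plain Alias instead. mod_alias matches these directives in configuration order, so make sure the ScriptAlias for /nagios/cgi-bin/ appears before the Alias for /nagios, and restart Apache after any change.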

    Read the article

  • Oracle command hangs when using view for "WHERE x IN..." subquery

    - by Calvin Fisher
    I'm working on a web service that fetches data from an oracle data source in chunks and passes it back to an indexing/search tool in XML format. I'm the C#/.NET guy, and am kind of fuzzy on parts of Oracle. Our Oracle team gave us the following script to run, and it works well: SELECT ROWID, [columns] FROM [table] WHERE ROWID IN ( SELECT ROWID FROM ( SELECT ROWID FROM [table] WHERE ROWID > '[previous_batch_last_rowid]' ORDER BY ROWID ) WHERE ROWNUM <= 10000 ) ORDER BY ROWID 10,000 rows is an arbitrary but reasonable chunk size and ROWID is sufficiently unique for our purposes to use as a UID since each indexing run hits only one table at a time. Bracketed values are filled in programmatically by the web service. Now we're going to start adding views to the indexing, each of which will union a few separate tables. Since ROWID would no longer function as a unique identifier, they added a column to the views (VIEW_UNIQUE_ID) that concatenates the ROWIDs from the component tables to construct a UID for each union. But this script does not work, even though it follows the same form as the previous one: SELECT VIEW_UNIQUE_ID, [columns] FROM [view] WHERE VIEW_UNIQUE_ID IN ( SELECT VIEW_UNIQUE_ID FROM ( SELECT VIEW_UNIQUE_ID FROM [view] WHERE ROWID > '[previous_batch_last_view_unique_id]' ORDER BY VIEW_UNIQUE_ID ) WHERE ROWNUM <= 10000 ) ORDER BY VIEW_UNIQUE_ID It hangs indefinitely with no response from the Oracle server. I've waited 20+ minutes and the SQLTools dialog box indicating a running query remains the same, with no progress or updates. I've tested each subquery independently and each works fine and takes a very short amount of time (<= 1 second), so the view itself is sound. But as soon as the inner two SELECT queries are added with "WHERE VIEW_UNIQUE_ID IN...", it hangs. Why doesn't this query work for views? In what important way are they not interchangeable here?
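    Two things stand out in the view version of the script. First, the innermost query still filters on ROWID (WHERE ROWID > '[previous_batch_last_view_unique_id]') while selecting and ordering by VIEW_UNIQUE_ID; that looks like a leftover from the table version and is worth double-checking, since comparing a ROWID against a concatenated-ROWID string over a union view is at best meaningless. Second, VIEW_UNIQUE_ID is computed by concatenation, so no index backs it and the IN-subquery forces repeated evaluation over the whole union. A sketch of an analytic rewrite that touches the view only once, keeping the placeholder tokens from the question:

      SELECT view_unique_id, [columns]
      FROM (
          SELECT v.*,
                 ROW_NUMBER() OVER (ORDER BY view_unique_id) AS rn
          FROM   [view] v
          WHERE  view_unique_id > '[previous_batch_last_view_unique_id]'
      )
      WHERE rn <= 10000
      ORDER BY view_unique_id

    This is the standard Oracle keyset-plus-ROW_NUMBER pagination shape; whether it performs well here still depends on how expensive one full pass over the union is, but it at least removes the self-referencing IN clause.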

    Read the article

  • Magento MAGMI: Product attributes (custom options) not showing up in import

    - by Rodgers and Hammertime
    When importing a CSV into Magento with the MAGMI importing tool, I am unable to import Custom Options (as in size: small/medium/large). The import manages to put in the basic products, but the Custom Options don't transfer across. By custom options I mean the fields Title, Input Type, Is Required, Sort Order, followed by a row of Title, Price, Price Type, SKU, Sort Order for each option value, and so on... found in the custom options menu. Even using the example CSV from the MAGMI SourceForge wiki:

      sku,name,description,price,Size:drop_down:1
      T-Shirt1,T-Shirt,A T-Shirt,5.00,Small|Medium|Large
      T-Shirt2,T-Shirt2,Another T-Shirt,6.00,XS|S|M|L|XL

    ...it fails to import the attributes. So I'm simply using MAGMI with the supplied example data from SourceForge on a blank Magento product list, and it doesn't transfer properly. Can anyone shed any light on what might be wrong? I am using Magento ver. 1.6.1.0 if that changes anything. Thanks.

    Read the article

  • jquery Ajax sortable list stops being sortable when Ajax call updates the list html

    - by Trevor
    Hi, I am using jQuery 1.4.2 and trying to achieve the following:

    1 - function call that sends a value to a php page to add/remove an item
    2 - returns html list of the items
    3 - list should still be sortable
    4 - save (serialise list) onclick

    My full WIP is located here [http://www.chittak.co.uk/test4/index_nw3.php][1]

    I tried to delegate from the level above the UL but I could not get this to work: $("#construnctionstage").delegate('ul li', 'click', function(){ The initial list is sortable; when you click add/remove, the Ajax function returns a new list with a number of items, BUT I am doing something wrong, as the alert message continues to work while the list is no longer sortable.

      $(document).ready(function(){
          $('ul').delegate('li', 'click', function(){
              alert('You clicked on an li element!');
              /*$("#test-list").sortable({
                  handle : '.handle',
                  update : function () {
                      var order = $('#test-list').sortable('serialize');
                      $("#info").load("process-sortable.php?"+order);
                  }
              });*/
          }).sortable({
              handle : '.handle',
              update : function () {
                  var order = $('#test-list').sortable('serialize');
                  $("#info").load("process-sortable.php?"+order);
              }
          });
      });

      <div id="construnctionstage">
      <ul id="test-list">
      <li id="listItem_1">
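    The delegated click handler survives the Ajax replacement because it is bound higher up the DOM, but .sortable() is initialized only on the elements present when it runs; when the Ajax call swaps in fresh markup, the new list has no sortable behaviour attached. A sketch of a fix is to re-initialize sortable in the load() callback (assuming the refreshed markup contains the #test-list ul; the add/remove Ajax call would need the same treatment in its success callback):

      function makeSortable() {
          $("#test-list").sortable({
              handle: '.handle',
              update: function () {
                  var order = $('#test-list').sortable('serialize');
                  // re-bind once the server response has replaced the markup
                  $("#info").load("process-sortable.php?" + order, makeSortable);
              }
          });
      }

      $(document).ready(function () {
          $('ul').delegate('li', 'click', function () {
              alert('You clicked on an li element!');
          });
          makeSortable();
      });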

    Read the article

  • Limiting choices from an intermediary ManyToMany junction table in Django

    - by Matthew Rankin
    Background I've created three Django models—Inventory, SalesOrder, and Invoice—to model items in inventory, sales orders for those items, and invoices for a particular sales order. Each sales order can have multiple items, so I've used an intermediary junction table—SalesOrderItems—using the through argument for the ManyToManyField. Also, partial billing of a sales orders is allowed, so I've created a ForeignKey in the Invoice model related to the SalesOrder model, so that a particular sales order can have multiple invoices. Here's where I deviate from what I've normally seen. Instead of relating the Invoice model to the Item model via a ManyToManyField, I've related the Invoice model to the SalesOrderItem intermediary junction table through the intermediary junction table InvoiceItem. I've done this because it better models reality—our invoices are tied to sales orders and can only include items that are tied to that sales order as opposed to any item in inventory. I will admit that it does seem strange having the intermediary junction table of a ManyToManyField related to the intermediary junction table of another ManyToManyField. Question How can I limit the choices available for the invoice_items in the Invoice model to just the sales_order_items of the SalesOrder model for that particular Invoice? (I tried using limit_choices_to= {'sales_order': self.invoice.sales_order}) as part of the item = models.ForeignKey(SalesOrderItem) in the InvoiceItem model, but that didn't work. Am I correct in thinking that limiting the choices for the invoice_items should be handled in the model instead of in a form? Code class Item(models.Model): item_num = models.SlugField(unique=True) default_price = models.DecimalField(max_digits=10, decimal_places=2, blank=True, null=True) class SalesOrderItem(models.Model): item = models.ForeignKey(Item) sales_order = models.ForeignKey('SalesOrder') unit_price = models.DecimalField(max_digits=10, decimal_places=2) quantity = models.DecimalField(max_digits=10, decimal_places=4) class SalesOrder(models.Model): customer = models.ForeignKey(Party) so_num = models.SlugField(max_length=40, unique=True) sales_order_items = models.ManyToManyField(Item, through=SalesOrderItem) class InvoiceItem(models.Model): item = models.ForeignKey(SalesOrderItem) invoice = models.ForeignKey('Invoice') unit_price = models.DecimalField(max_digits=10, decimal_places=2) quantity = models.DecimalField(max_digits=10, decimal_places=4) class Invoice(models.Model): invoice_num = models.SlugField(max_length=25) sales_order = models.ForeignKey(SalesOrder) invoice_items = models.ManyToManyField(SalesOrderItem, through='InvoiceItem')
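    On the direct question: limit_choices_to cannot see the current instance (there is no self available at class-definition time, which is why that attempt failed), so this is one of the cases where the restriction does belong in a form rather than the model. A sketch against a newer Django, using a ModelForm for the InvoiceItem junction rows:

      from django import forms

      class InvoiceItemForm(forms.ModelForm):
          class Meta:
              model = InvoiceItem
              fields = '__all__'

          def __init__(self, *args, **kwargs):
              super(InvoiceItemForm, self).__init__(*args, **kwargs)
              # only items belonging to this invoice's sales order are offered
              if self.instance.pk and self.instance.invoice_id:
                  self.fields['item'].queryset = SalesOrderItem.objects.filter(
                      sales_order=self.instance.invoice.sales_order)

    Wiring this into an admin inline takes a little more plumbing (the parent Invoice arrives via the formset rather than the instance), but the principle is the same: constrain the queryset at the point where the concrete invoice is known.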

    Read the article

  • Windows 7 64 Bit - ODBC32 - Legacy App Problem

    - by Arturo Caballero
    Good day StackOverFlowlers, I'm a little stuck (really stuck) with an issue with a legacy application in my organization. I have a Windows 7 Enterprise 64-bit machine, Access 2000 installed, and the legacy app (it is built with something like VB, but older). The app uses a System ODBC DSN in order to connect to a SQL 2000 database on a remote server. I created the ODBC entry using the C:\Windows\SysWOW64\odbcad32.exe app in order to create a System DSN. I did not use the Windows 7 one because it is not visible to the legacy app. I tested the ODBC connection with Access and it worked OK; I can access the remote database. Then I run the legacy app as Administrator, and the app can see the ODBC entry, but I'm getting errors on credential validation and I'm getting this error:

      DIAG [08001] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]SQL Server does not exist or access denied. (17)
      DIAG [01000] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]ConnectionOpen (Connect()). (53)
      DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed (0)

    I use Trusted Connection on the ODBC in order to validate the user through the domain controller. I think that the credentials are not being sent by the legacy app to the ODBC driver, or something like that. I don't have the source code of the legacy app, so I can't debug the connection. Also, I turned off the firewall. Any ideas?? Thanks in advance!

    Read the article

  • Delivery of JMS message before the transaction is committed

    - by ewernli
    Hi, I have a very simple scenario involving a database and JMS in an application server (Glassfish):

    1. An EJB inserts a row in the database and sends a JMS message.
    2. When the message is delivered to an MDB, the row is read and updated.

    The problem is that sometimes the message is delivered before the insert has been committed in the database. This is actually understandable given the two-phase commit protocol:

    1. prepare JMS
    2. prepare database
    3. commit JMS
    4. (tiny gap where the message can be delivered before the insert has been committed)
    5. commit database

    I've discussed this problem with others, but the answer was always: "Strange, it should work out of the box." My questions are then: How could it work out of the box? My scenario sounds fairly simple; why aren't more people running into similar trouble? Am I doing something wrong? Is there a way to solve this issue correctly?

    Here are a few more details about my understanding of the problem. The timing issue exists only if the participants are treated in this order; if the 2PC treats them in the reverse order (database first, then message broker), everything is fine. The problem occurred intermittently but was completely reproducible. I found no way to control the order of the participants in a distributed transaction in the JTA, JCA, or JPA specifications, nor in the Glassfish documentation. We could assume they are enlisted in the distributed transaction in the order in which they are used, but with an ORM such as JPA it's difficult to know when the data is flushed and when the database connection is really used. Any idea?
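
    One pragmatic workaround seen in practice, offered as a hedged sketch rather than a confirmed fix: let the container redeliver the message whenever the row is not yet visible. The OrderRow entity, its orderId message property, and the processed flag are all hypothetical stand-ins for the question's row:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
    })
    public class OrderMdb implements MessageListener {

        @PersistenceContext
        private EntityManager em;

        public void onMessage(Message message) {
            try {
                long id = message.getLongProperty("orderId");
                OrderRow row = em.find(OrderRow.class, id);
                if (row == null) {
                    // The insert has not committed yet: throwing a runtime
                    // exception rolls back this delivery, and the container
                    // redelivers the message after its redelivery delay.
                    throw new IllegalStateException("row " + id + " not visible yet");
                }
                row.setProcessed(true);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }

    This trades the race for a bounded number of redeliveries, so the broker's redelivery delay and dead-letter settings matter.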

    Read the article

  • ActionView::TemplateError (integer 23656121084180 too big to convert to `unsigned int')

    - by jaycode
    Hi, this is the weirdest error I've ever gotten in Rails. Any idea what it might be? NOTE: the error DOES NOT come from @order.get_invoice_number; I've split the code across multiple lines, and it is clear the problem is within {:host => ... }.

    ActionView::TemplateError (integer 23656121084180 too big to convert to `unsigned int')
    on line #56 of app/views/order_mailer/order_detail.text.html.erb:

    53: <b>Order #:</b>
    54: </td>
    55: <td width="98%">
    56: <%= link_to "#{@order.get_invoice_number}", {:host => Thread.current[:host], :controller => 'store/account', :action => 'view_order', id => "#{@order.id}"}, {:target => '_blank'} %>
    57: </td>
    58: </tr>
    59: <tr>

    app/views/order_mailer/order_detail.text.html.erb:56
    app/controllers/store/test_controller.rb:11:in `order_email'
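
    A hedged observation, not a confirmed fix: in the hash on line 56, bare id is parsed as a method call, so its return value (a very large object id, plausibly the 23656121084180 in the error) becomes a hash key instead of the symbol :id. A version using the symbol would look like this sketch:

    <%= link_to "#{@order.get_invoice_number}",
                { :host => Thread.current[:host],
                  :controller => 'store/account',
                  :action => 'view_order',
                  :id => @order.id },
                { :target => '_blank' } %>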

    Read the article

  • Error in asp.net c# code (mysql database connection)

    - by Ishan
    My code updates a record if it already exists in the database and otherwise inserts a new record. The code is as follows:

    protected void Button3_Click(object sender, EventArgs e)
    {
        OdbcConnection MyConnection = new OdbcConnection("Driver={MySQL ODBC 3.51 Driver};Server=localhost;Database=testcase;User=root;Password=root;Option=3;");
        MyConnection.Open();
        String MyString = "select fil_no,orderdate from temp_save where fil_no=? and orderdate=?";
        OdbcCommand MyCmd = new OdbcCommand(MyString, MyConnection);
        MyCmd.Parameters.AddWithValue("", HiddenField4.Value);
        MyCmd.Parameters.AddWithValue("", TextBox3.Text);
        using (OdbcDataReader MyReader4 = MyCmd.ExecuteReader())
        {
            if (MyReader4.Read())
            {
                String MyString1 = "UPDATE temp_save SET order=? where fil_no=? AND orderdate=?";
                OdbcCommand MyCmd1 = new OdbcCommand(MyString1, MyConnection);
                MyCmd1.Parameters.AddWithValue("", Editor1.Content.ToString());
                MyCmd1.Parameters.AddWithValue("", HiddenField1.Value);
                MyCmd1.Parameters.AddWithValue("", TextBox3.Text);
                MyCmd1.ExecuteNonQuery();
            }
            else
            {
                // set the SQL string
                String strSQL = "INSERT INTO temp_save (fil_no,order,orderdate) " +
                                "VALUES (?,?,?)";
                // Create the Command and set its properties
                OdbcCommand objCmd = new OdbcCommand(strSQL, MyConnection);
                objCmd.Parameters.AddWithValue("", HiddenField4.Value);
                objCmd.Parameters.AddWithValue("", Editor1.Content.ToString());
                objCmd.Parameters.AddWithValue("", TextBox3.Text);
                // execute the command
                objCmd.ExecuteNonQuery();
            }
        }
    }

    I am getting the error:

    ERROR [42000] [MySQL][ODBC 3.51 Driver][mysqld-5.1.51-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'order,orderdate) VALUES ('04050040272009','&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&' at line 1

    The datatypes of the fields in table temp_save are:

    fil_no --> INT(15) (to store a 15-digit number)
    order --> LONGTEXT (to store contents from HTMLEditor (an AJAX control))
    orderdate --> DATE (to store a date)

    Please help me resolve this error.
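
    A hedged diagnosis, since the poster hasn't confirmed it: order is a reserved word in MySQL, so using it as an unquoted column name makes the statement unparsable, which matches the error pointing right at 'order,orderdate)'. Quoting it with backticks in both statements would look like:

    // `order` is a MySQL reserved word; backticks make it a plain identifier.
    String MyString1 = "UPDATE temp_save SET `order`=? WHERE fil_no=? AND orderdate=?";
    String strSQL = "INSERT INTO temp_save (fil_no,`order`,orderdate) " +
                    "VALUES (?,?,?)";

    Separately, values like 04050040272009 have 14 digits, which overflows a signed INT regardless of the (15) display width; BIGINT would be a safer type for fil_no.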

    Read the article

  • Using Mod-Rewrite in XAMPP

    - by rrrfusco
    I've followed some tutorials on how to use mod_rewrite, but it's not working out. I have a PHP index page that takes a page parameter, e.g. index.php?page=name1, index.php?page=name2, index.php?page=name3, etc.:

    <?php
    if (isset($_GET['page'])) {
        switch ($_GET['page']) {
            case 'front':
                include "front.php";
                break;
            default:
                break;
        }
    }
    ?>

    I'd like to use mod_rewrite so that the URLs display as site.com/name1. Is this possible with the code above? Below is what I've been trying in the Apache config files, to no avail.

    apache/conf/http.conf:

    line 122: LoadModule rewrite_module modules/mod_rewrite.so
    line 188: DocumentRoot "G:/xampp/htdocs"
    line 198: # default
    <Directory />
        Options FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
    </Directory>
    line 215: <Directory "G:/xampp/htdocs">
    line 228: Options Indexes FollowSymLinks Includes ExecCGI
    line 235: AllowOverride All

    # cgi
    line 355: <Directory "G:/xampp/cgi-bin">
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
    </Directory>

    G:\xampp\apache\conf\extra\http.v-hosts.conf:

    <VirtualHost *:80>
        DocumentRoot G:/xampp/htdocs/
        ServerName localhost
        ServerAdmin admin@localhost
        <Directory "G:/xampp/htdocs/localhost/">
            Options Indexes FollowSymLinks
            AllowOverride FileInfo
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

    <VirtualHost *:80>
        DocumentRoot G:/xampp/htdocs/site2/
        ServerName site2.localhost
        ServerAdmin [email protected]
        <Directory "G:/xampp/htdocs/site2.localhost/">
            Options Indexes FollowSymLinks
            AllowOverride FileInfo
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

    .htaccess file:

    IndexIgnore *
    RewriteEngine on
    RewriteRule ^([^/\.]+)/?$ /index.php?page=$1 [L]
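
    For comparison, a minimal .htaccess sketch that pairs with the switch above. It assumes the LoadModule line at 122 is uncommented, AllowOverride All is in effect for htdocs, and Apache has been restarted; the RewriteCond lines are additions so requests for real files and directories still work:

    IndexIgnore *
    RewriteEngine On
    # Leave existing files and directories alone so CSS/JS/images load.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^([^/.]+)/?$ index.php?page=$1 [L,QSA]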

    Read the article
