Search Results

Search found 22515 results on 901 pages for 'created'.


  • Drupal 6: Creating "ON/OFF News Links" functionality for a block created with the Views module.

    - by artmania
    Hi friends, I'm a Drupal newbie who needs some advice. I have a news list block on the homepage, created with the Views module. It lists each news item's title as a link. Everything is cool so far. Now I need to add an ON/OFF option on the admin side for the homepage news block. When the setting is ON, the block works as it does now. When it is OFF, only the titles are listed, with no links to the news detail pages. So, where should I add this ON/OFF option? I only have add/edit/delete pages for each news item; there is no common news settings page where such an option would fit. Should I create an admin page containing the ON/OFF option? I also think I need to create a db table to keep this ON/OFF status, then check that value in the homepage block and display links according to it. That looks like a long way around:

    1. Create a db table.
    2. Create an admin page with the ON/OFF option on it.
    3. Add PHP code to update the db with the admin's choice.
    4. Read the db value in the homepage block to decide whether to display links.

    Is there any shorter, better way to do what I need? Appreciate the help so much! Thanks a lot!
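    For reference, a minimal sketch of one way this is often done in Drupal 6 without a custom table, using the variable API and a standard settings form (the module name, variable name, and form callback here are illustrative, not from the question):

        // mymodule.module - settings form; system_settings_form() persists the
        // checkbox into Drupal's 'variable' table automatically.
        function mymodule_admin_settings() {
          $form['mymodule_news_links'] = array(
            '#type' => 'checkbox',
            '#title' => t('Link news titles to their detail pages'),
            '#default_value' => variable_get('mymodule_news_links', 1),
          );
          return system_settings_form($form);
        }

        // In the block/view template, branch on the stored value
        // (variable names depend on your view's row template):
        if (variable_get('mymodule_news_links', 1)) {
          print l($node->title, 'node/' . $node->nid);  // linked title
        }
        else {
          print check_plain($node->title);              // plain title
        }

    A hook_menu() entry would still be needed to expose the settings page, but variable_get()/variable_set() replace the hand-rolled table entirely.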


  • Why is a newly created entity object with a navigation property automatically added to the ObjectContext?

    - by Levelbit
    I have two entities: Company and Location (one to many). When I create a new Location entity object and assign its Company navigation property from an already existing Location object (Location _new = new Location(); _new.Company = _old.Company), it seems that at that point the newly created object is added to the ObjectContext automatically, because when I call the SaveChanges method that object is inserted into the database even though I didn't call ObjectContext.AddObject(_new). I'm new to EF, so there is probably a reason why I get this result? Do I also need to assign the CompanyReference field, and how would I do that?

        IDaoFactory daoFactory = new DaoFactory();
        ILocationDao locationDao = daoFactory.GetLocationDao();
        IEnumerable<Location> locations = locationDao.GetLocations();
        Location _old = locations.First();

        Location _new = new Location();
        _new.LocationName = _old.LocationName;
        _new.Company = _old.Company; // 1
        _new.Address = _old.Address;
        //...
        ContactEntities.SaveChanges(); // 2

    As soon as I execute line (1), the _new object is added to the object context, and I can see an additional data row in my datagrid after line (2) is executed.
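    This matches documented ObjectContext behavior: assigning a navigation property that points at an entity the context is already tracking attaches the new entity through relationship fix-up, in the Added state. A small sketch of how this could be verified and opted out of (context and variable names are taken from the question; the loop is illustrative):

        Location _new = new Location();
        _new.Company = _old.Company;   // relationship fix-up attaches _new as Added

        // Inspect what the context now tracks as pending inserts:
        foreach (var entry in ContactEntities.ObjectStateManager
                     .GetObjectStateEntries(EntityState.Added))
            Console.WriteLine(entry.Entity.GetType().Name);

        // If _new should not be persisted on the next SaveChanges:
        ContactEntities.Detach(_new);

    With the navigation property set this way, the generated CompanyReference is kept in sync automatically, so there is no need to assign it separately.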


  • When are temporaries created as part of a function call destroyed?

    - by Michael Mrozek
    Is a temporary created as part of an argument to a function call guaranteed to stay around until the called function ends, even if the temporary isn't passed directly to the function? There's virtually no chance that was coherent, so here's an example:

        class A {
        public:
            A(int x) : x(x) { printf("Constructed A(%d)\n", x); }
            ~A() { printf("Destroyed A\n"); }
            int x;
            int* y() { return &x; }
        };

        void foo(int* bar) {
            printf("foo(): %d\n", *bar);
        }

        int main(int argc, char** argv) {
            foo(A(4).y());
        }

    If A(4) were passed directly to foo, it would definitely not be destroyed until after the foo call ended. But instead I'm calling a method on the temporary and losing any reference to it. I would instinctively think the temporary A would be destroyed before foo even starts, but testing with GCC 4.3.4 shows it isn't; the output is:

        Constructed A(4)
        foo(): 4
        Destroyed A

    The question is: is GCC's behavior guaranteed by the spec? Or is a compiler allowed to destroy the temporary A before the call to foo, invalidating the pointer to its member I'm using?
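    For reference, the standard does guarantee this: temporaries persist until the end of the full-expression in which they were created (C++03 §12.2/3), and the full-expression here includes the call to foo. A sketch of where the guarantee stops:

        foo(A(4).y());      // OK: A(4) is destroyed at the end of this
                            // full-expression, i.e. after foo() returns.

        int* p = A(4).y();  // A(4) is destroyed at the end of THIS statement...
        foo(p);             // ...so p dangles here: undefined behavior.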


  • How to create a datastore.Text object out of an array of dynamically created Strings?

    - by Adrogans
    I am creating a Google App Engine server for a project where I receive a large quantity of data via an HTTP POST request. The data is separated into lines, with 200 characters per line. The number of lines can go into the hundreds, so tens of thousands of characters in total. What I want to do is concatenate all of those lines into a single Text object, since String properties have a maximum length of 500 characters but a Text object can be as large as 1 MB. Here is what I have thought of so far:

        public void doPost(HttpServletRequest req, HttpServletResponse resp) {
            ...
            String[] audioSampleData = new String[numberOfLines];
            for (int i = 0; i < numberOfLines; i++) {
                audioSampleData[i] = req.getReader().readLine();
            }
            com.google.appengine.api.datastore.Text textAudioSampleData =
                new Text(audioSampleData[0] + audioSampleData[1] + ...);
            ...
        }

    But as you can see, I don't know how to do this without knowing the number of lines beforehand. Is there a way for me to iterate through the String indexes within the Text constructor? I can't seem to find anything on that. Note that the Text object can't be modified after being created, and it must take a String as the constructor parameter. (Documentation here.) Is there any way to do this? I need all of the data in the String array in one Text object. Many thanks!
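    A minimal sketch of the usual approach: accumulate the lines in a StringBuilder and construct the Text once at the end, so no fixed-size array or prior line count is needed:

        BufferedReader reader = req.getReader();
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line);                // append each 200-character line
        }
        Text textAudioSampleData = new Text(sb.toString());  // one Text, up to 1 MB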


  • Call HttpWebRequest on a thread other than the UI thread with the Task class - avoid losing objects created in the Task scope

    - by John
    I would like to call HttpWebRequest on a thread other than the UI thread, because I must make 200 requests to the server and download an image for each. My scenario is that I make a request to the server, create an image, and return the image, all on another thread. I use the Task class, but it appears that Dispose is called automatically on all objects created in the task scope, so I return a null object from this method.

        public BitmapImage CreateAvatar(Uri imageUri, int sex)
        {
            if (imageUri == null)
                return CreateDefaultAvatar(sex);

            BitmapImage image = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    Byte[] buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);

                    image = new BitmapImage
                    {
                        CreateOptions = BitmapCreateOptions.None,
                        CacheOption = BitmapCacheOption.OnLoad
                    };
                    image.BeginInit();
                    image.StreamSource = new MemoryStream(buffer);
                    image.EndInit();
                    image.Freeze();
                }
            }).Start();
            return image;
        }

    How can I avoid this? Thanks. Following Mr. Jon Skeet's suggestion, I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            //new Task(() =>
            //{
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);
                }
            //}).Start();
            return new MemoryStream(buffer);
        }

    It returns an object which is null, and then I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);
                }
            }).Start();
            return new MemoryStream(buffer);
        }

    The method above returns null.
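    The likely issue here is timing rather than disposal: the method returns before the task body has run, so buffer and image are still null at that point. A sketch of one way to hand the result back, using Task<TResult> and blocking on Result (the wrapper method name is illustrative; everything else follows the question's code):

        private Task<Stream> GetImageStreamAsync(Uri imageUri)
        {
            return Task.Factory.StartNew<Stream>(() =>
            {
                var request = WebRequest.Create(imageUri);
                using (var response = request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    var buffer = new byte[response.ContentLength];
                    int offset = 0, read;
                    while ((read = stream.Read(buffer, offset, buffer.Length - offset)) > 0)
                        offset += read;
                    return new MemoryStream(buffer);  // produced when the task completes
                }
            });
        }

        // Caller: start the download, do other work, then wait for the data.
        var streamTask = GetImageStreamAsync(imageUri);
        // ...
        Stream imageStream = streamTask.Result;  // blocks until the download finishes

    Accessing Result (or calling Wait) is what guarantees the task has actually produced the stream before it is used.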


  • How to resize an OpenGL window created with wglCreateContext?

    - by Nick
    Is it possible to resize an OpenGL window (or device context) created with wglCreateContext without disabling it? If so, how? Right now I have a function which resizes the DC, but the only way I could get it to work was to call DisableOpenGL and then re-enable. This causes any textures and other state changes to be lost. I would like to do this without the disable so that I do not have to go through the tedious task of recreating the OpenGL DC state.

        HWND hWnd;
        HDC hDC;

        void View_setSizeWin32(int width, int height)
        {
            // resize the window
            LPRECT rec = malloc(sizeof(RECT));
            GetWindowRect(hWnd, rec);
            SetWindowPos(hWnd, HWND_TOP, rec->left, rec->top,
                         rec->left + width, rec->left + height, SWP_NOMOVE);
            free(rec);

            // sad panda
            DisableOpenGL(hWnd, hDC, hRC);
            EnableOpenGL(hWnd, &hDC, &hRC);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-(width/2), width/2, -(height/2), height/2, -1.0, 1.0);

            // have fun recreating the OpenGL state....
        }

        void EnableOpenGL(HWND hWnd, HDC * hDC, HGLRC * hRC)
        {
            PIXELFORMATDESCRIPTOR pfd;
            int format;

            // get the device context (DC)
            *hDC = GetDC(hWnd);

            // set the pixel format for the DC
            ZeroMemory(&pfd, sizeof(pfd));
            pfd.nSize = sizeof(pfd);
            pfd.nVersion = 1;
            pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
            pfd.iPixelType = PFD_TYPE_RGBA;
            pfd.cColorBits = 24;
            pfd.cDepthBits = 16;
            pfd.iLayerType = PFD_MAIN_PLANE;
            format = ChoosePixelFormat(*hDC, &pfd);
            SetPixelFormat(*hDC, format, &pfd);

            // create and enable the render context (RC)
            *hRC = wglCreateContext(*hDC);
            wglMakeCurrent(*hDC, *hRC);
        }

        void DisableOpenGL(HWND hWnd, HDC hDC, HGLRC hRC)
        {
            wglMakeCurrent(NULL, NULL);
            wglDeleteContext(hRC);
            ReleaseDC(hWnd, hDC);
        }
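    For what it's worth, the rendering context does not normally need to be recreated on resize: the same HGLRC stays valid, and only the viewport and projection have to be updated. A sketch of the resize function without the disable/enable cycle (same globals as above; note that SetWindowPos takes a width and height, not right/bottom coordinates, which also looks like a bug in the original call):

        void View_setSizeWin32(int width, int height)
        {
            // cx/cy are a size in pixels; with SWP_NOMOVE the x/y are ignored
            SetWindowPos(hWnd, HWND_TOP, 0, 0, width, height, SWP_NOMOVE);

            // the existing context stays current; just adapt to the new size
            glViewport(0, 0, width, height);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-(width / 2), width / 2, -(height / 2), height / 2, -1.0, 1.0);
            glMatrixMode(GL_MODELVIEW);
        }

    Textures and other state survive because wglDeleteContext is never called.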


  • Can an XML schema created by the Ant "schemagen" task be customized?

    - by Take
    I have two Java classes like this:

        public class HogeDomain {
            private User userDomain;
            public HogeDomain() {
            }
            // getters and setters...
        }

        public class User {
            public User() {
            }
            private String id;
            private String password;
            private Date userDate;
            // getters and setters...
        }

    From these I created an XML schema automatically using the "schemagen" Ant task. It produced this:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <xs:schema version="1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:complexType name="hogeDomain">
            <xs:sequence>
              <xs:element name="userDomain" type="user" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
          <xs:complexType name="user">
            <xs:sequence>
              <xs:element name="id" type="xs:string" minOccurs="0"/>
              <xs:element name="password" type="xs:string" minOccurs="0"/>
              <xs:element name="userDate" type="xs:dateTime" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
        </xs:schema>

    But for JAXB marshalling and unmarshalling I really want a schema like this:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <xs:schema version="1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:complexType name="hogeDomain">
            <xs:sequence>
              <xs:element name="userDomain" type="user" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
          <xs:element name="user">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="id" type="xs:string" minOccurs="0"/>
                <xs:element name="password" type="xs:string" minOccurs="0"/>
                <xs:element name="userDate" type="xs:dateTime" minOccurs="0"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    How can I get the "schemagen" Ant task to create this schema? I don't want to write the schema by hand. And is there any workaround if it can't be done?
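    Schemagen derives the schema shape from JAXB annotations on the classes, so one possible route (a sketch, not verified against every JAXB version) is to annotate User so that a global element is emitted and its type becomes anonymous:

        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.bind.annotation.XmlType;

        @XmlRootElement(name = "user")  // emit a global <xs:element name="user">
        @XmlType(name = "")             // anonymous type, inlined under the element
        public class User {
            ...
        }

    One caveat: with an anonymous type there is no named "user" type left for hogeDomain's userDomain element to reference, so the two shapes in the desired schema may conflict with each other.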


  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel. They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them.

    Note: This article is written using Excel 2010 (Beta). The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel. If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article.

    A Little History

    In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost. Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus. Flash-forward to 2010, and Excel’s dominance of the spreadsheet market is greater than Lotus’s ever was, while the number of users still running Lotus 1-2-3 is approaching zero. How did this happen? What caused such a dramatic reversal of fortunes?

    Industry analysts put it down to two factors. Firstly, Lotus decided that this fancy new GUI platform called “Windows” was a passing fad that would never take off. They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need. Microsoft, naturally, developed Excel exclusively for Windows. Secondly, Microsoft developed a feature for Excel that Lotus didn’t provide in 1-2-3, namely PivotTables. The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entirely new software package (Excel) rather than stick with a program (1-2-3) that didn’t have it. This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel.

    Understanding PivotTables

    So what is a PivotTable, exactly? Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data. But unlike a manually created summary, Excel PivotTables are interactive. Once you have created one, you can easily change it if it doesn’t offer the exact insights into your data that you were hoping for. In a couple of clicks the summary can be “pivoted” – rotated in such a way that the column headings become row headings, and vice versa. There’s a lot more that can be done, too. Rather than try to describe all the features of PivotTables, we’ll simply demonstrate them…

    The data that you analyze using a PivotTable can’t be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort. An example of this might be the list of sales transactions in a company for the past six months.

    Examine the data shown below: Notice that this is not raw data. In fact, it is already a summary of some sort. In cell B3 we can see $30,000, which apparently is the total of James Cook’s sales for the month of January. So where is the raw data? How did we arrive at the figure of $30,000? Where is the original list of sales transactions that this figure was generated from? It’s clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above. How long do you suppose this took? An hour? Ten? Probably.
    If we were to track down the original list of sales transactions, it might look something like this: You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks. We can do this – and a lot more too!

    How to Create a PivotTable

    First, ensure that you have some raw data in a worksheet in Excel. A list of financial transactions is typical, but it can be a list of just about anything: employee contact details, your CD collection, or fuel consumption figures for your company’s fleet of cars.

    So we start Excel… …and we load such a list… Once we have the list open in Excel, we’re ready to start creating the PivotTable. Click on any one single cell within the list: Then, from the Insert tab, click the PivotTable icon: The Create PivotTable box appears, asking you two questions: What data should your new PivotTable be based on, and where should it be created? Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example). Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even a MS-SQL Server database table. We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one. In this example we will select a new one: The new worksheet is created for us, and a blank PivotTable is created on that worksheet: Another box also appears: the PivotTable Field List. This field list will be shown whenever we click on any cell within the PivotTable (above).

    The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet. The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data. So far, there is nothing in those boxes, so the PivotTable is blank. All we need to do is drag fields down from the list above and drop them in the lower boxes. A PivotTable is then automatically created to match our instructions. If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them.

    The Values box is arguably the most important of the four. The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc). It is almost always numerical data. A perfect candidate for this box in our sample data is the “Amount” field/column. Let’s drag that field into the Values box: Notice that (a) the “Amount” field in the list of fields is now ticked, and (b) “Sum of Amount” has been added to the Values box, indicating that the amount column has been summed.

    If we examine the PivotTable itself, we indeed find the sum of all the “Amount” values from the raw data worksheet: We’ve created our first PivotTable! Handy, but not particularly impressive. It’s likely that we need a little more insight into our data than that. Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total. For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.
    To achieve this, all we need to do is to drag the “Salesperson” field into the Row Labels box: Now, finally, things start to get interesting! Our PivotTable starts to take shape…. With a couple of clicks we have created a table that would have taken a long time to do manually.

    So what else can we do? Well, in one sense our PivotTable is complete. We’ve created a useful summary of our source data. The important stuff is already learned! For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized.

    First, we can create a two-dimensional table. Let’s do that by using “Payment Method” as a column heading. Simply drag the “Payment Method” heading to the Column Labels box: Which looks like this: Starting to get very cool!

    Let’s make it a three-dimensional table. What could such a table possibly look like? Well, let’s see… Drag the “Package” column/heading to the Report Filter box: Notice where it ends up…. This allows us to filter our report based on which “holiday package” was being purchased. For example, we can see the breakdown of salesperson vs payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the “Sunseekers” package: And so, if you think about it the right way, our PivotTable is now three-dimensional. Let’s keep customizing…

    If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the “Cash” item from the column headings. Click the drop-down arrow next to Column Labels, and untick “Cash”: Let’s see what that looks like… As you can see, “Cash” is gone.

    Formatting

    This is obviously a very powerful system, but so far the results look very plain and boring. For a start, the numbers that we’re summing do not look like dollar amounts – just plain old numbers. Let’s rectify that.

    A temptation might be to do what we’re used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting. The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost. We need a way that will make them (semi-)permanent.

    First, we locate the “Sum of Amount” entry in the Values box, and click on it. A menu appears. We select Value Field Settings… from the menu: The Value Field Settings box appears. Click the Number Format button, and the standard Format Cells box appears: From the Category list, select (say) Accounting, and drop the number of decimal places to 0. Click OK a few times to get back to the PivotTable… As you can see, the numbers have been correctly formatted as dollar amounts.

    While we’re on the subject of formatting, let’s format the entire PivotTable. There are a few ways to do this. Let’s use a simple one… Click the PivotTable Tools/Design tab: Then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles: Choose any one that appeals, and look at the result in your PivotTable.

    Other Options

    We can work with dates as well. Now usually, there are many, many dates in a transaction list such as the one we started with. But Excel provides the option to group data items together by day, week, month, year, etc. Let’s see how this is done.
    First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column: As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table!

    To fix this, right-click on any date and select Group… from the context menu: The grouping box appears. We select Months and click OK: Voila! A much more useful table. (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.)

    Another cool thing to be aware of is that you can have more than one set of row headings (or column headings): …which looks like this…. You can do a similar thing with column headings (or even report filters).

    Keeping things simple again, let’s see how to plot averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context menu that appears: In the Summarize value field by list in the Value Field Settings box, select Average: While we’re here, let’s change the Custom Name from “Average of Amount” to something a little more concise. Type in something like “Avg”: Click OK, and see what it looks like. Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”.

    If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable! Here are the steps to get something like that in place (starting from a blank PivotTable):

    1. Drag “Salesperson” into the Column Labels.
    2. Drag the “Amount” field down into the Values box three times.
    3. For the first “Amount” field, change its custom name to “Total” and its number format to Accounting (0 decimal places).
    4. For the second “Amount” field, change its custom name to “Average”, its function to Average and its number format to Accounting (0 decimal places).
    5. For the third “Amount” field, change its name to “Count” and its function to Count.
    6. Drag the automatically created field from Column Labels to Row Labels.

    Here’s what we end up with: total, average and count on the same PivotTable!

    Conclusion

    There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this. To fully cover the potential of PivotTables, a small book (or a large website) would be required. Brave and/or geeky readers can explore PivotTables further quite easily: simply right-click on just about everything, and see what options become available to you. There are also the two ribbon tabs: PivotTable Tools/Options and Design. It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had. We’ve included an Excel file that should work with most versions of Excel, so you can download it to practice your PivotTable skills.


  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.)

    One of the things one might take for granted, but which has a huge impact on the time spent in an entity modeling environment, is the way the system creates names for elements out of the information provided; in short: automatic element name construction. Element names are created in both directions of modeling, database first and model first, and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine-grained system for creating element names out of the meta-data available, which I'll describe in more detail below. First the model element related naming features are highlighted, in the section Automatic model element naming features, and after that I'll go into more detail about the relational model element naming features LLBLGen Pro has to offer, in the section Automatic relational model element naming features.

    Automatic model element naming features

    When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also comes into play with model-first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction.

    Strippers!

    The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro lets you define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names.

    Underscores Be Gone

    Another thing you might get rid of is underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to do so.

    PasCal everywhere... or not, your call

    LLBLGen Pro can automatically PasCal-case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break, and a case difference marks a word break. It will remove spaces in all cases, and, based on the underscore removal setting, keep or remove the underscores, upper-case the first character of a word break fragment, and lower-case the rest.
    Say we keep the defaults, which are to remove underscores, always PasCal-case, and strip the TBL_ fragment. With our example TBL_ORDER_LINES, after stripping TBL_ from the table name we get two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there!

    Pluralization and Singularization

    In general entity names are singular, like Customer or OrderLine, so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after.

    Show me the patterns!

    There are other situations in which you want more flexibility. Say you have an entity Customer and an entity Order, and there's a foreign key constraint defined from the target of Order to the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look. For example, if you'd rather have Customer.OrderCollection, you can do so by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed.

    When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities, Customer and Order, and they have their fields set up properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes; on NHibernate and EF these can be hidden in the generated code if you want to.) A specific pattern is available for you to direct how LLBLGen Pro names this foreign key field. For example, if all your entities have Id as PK field, you might want a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as the foreign key name in Order. The pattern for foreign key fields gives you that freedom.

    Abbreviations... make sense of OrdNr and friends

    I already described word breaks in the PasCal casing paragraph, and how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office, has a hate-hate relationship with his keyboard: he can't stand it; typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. You'd also like not to have to rename that field manually.
    There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs, and during reverse engineering of model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two values: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: navigate to Conventions -> Element Name Construction -> Abbreviations.

    Automatic relational model element naming features

    Not everyone works database first: it may very well be the case that you start from scratch, or have to add additional tables to an existing database. For these situations, it's key that you can control the created table and field names without extra work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development.

    Underscores, welcome back!

    Not every database is case insensitive, and not every organization requires PasCal-cased table/field names; some demand all lower- or all upper-case names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with, case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower-case characters and you need all caps. No worries; see the casing directives below.

    Casing directives so everyone can sleep well at night

    For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper-case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE.

    Sequence naming after a pattern

    Some databases support sequences, and using model-first development it's key that sequences, when needed, are created automatically, if possible using a name which shows where they're used. Say you have an entity Order and you want the PK values to be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle), and as you want all numeric PK fields to be sequenced, you have enabled this via the setting Auto assign sequences to integer pks.
    When you're using LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it will create a new table, ORDER, based on the settings I discussed above, with a PK field ID, and it also creates a sequence, SEQ_ORDER, which is auto-assigned to the ID field mapping. The name of the sequence is created using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros, like the other patterns previously discussed.

    Grouping and schemas

    When you start from scratch and you're working model first, the tables created by LLBLGen Pro will be in a catalog and/or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique to these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you set the setting Set schema name after group name to true (the default). This gives you total control over what is placed where in the database from your model.

    But I want plural table names... and TBL_ prefixes!

    For now we follow best practices, which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making that possible in a future version.

    Conclusion

    LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and it has given you new insights into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!


  • GROUP BY and SUM distinct date across 2 tables

    - by kenitech
    I'm not sure if this is possible in one MySQL query, so I might just combine the results via PHP. I have two tables, 'users' and 'billing', and I'm trying to group summed activity for every date that is available in these two tables. 'users' is not historical data, but 'billing' contains a record for each transaction. In this example I am showing a user's status, which I'd like to sum by created date, and deposit amounts, which I would also like to sum by created date. I realize there is a bit of a disconnect between the data, but I'd like to sum all of it together and display it as seen below. This will show me an overview of all of the users by when they were created and what the current statuses are, next to total transactions. I've tried UNION as well as LEFT JOIN, but I can't seem to get either to work. The UNION example is pretty close, but it doesn't combine the dates into one row:

        ( SELECT created, SUM(status) as totalActive, NULL as totalDeposit
          FROM users
          GROUP BY created )
        UNION
        ( SELECT created, NULL as totalActive, SUM(transactionAmount) as totalDeposit
          FROM billing
          GROUP BY created )

    I've also tried using a date lookup table and joining on the dates, but then the SUM values are added multiple times. Note: I don't care about the userIds at all, but have them in here for the example.

    users table (where a status of '1' denotes "active"; one record for each user):

        created    | userId | status
        2010-03-01 | 10     | 0
        2010-03-01 | 11     | 1
        2010-03-01 | 12     | 1
        2010-03-10 | 13     | 0
        2010-03-12 | 14     | 1
        2010-03-12 | 15     | 1
        2010-03-13 | 16     | 0
        2010-03-15 | 17     | 1

    billing table (a record is created for every billing "transaction"):

        created    | userId | transactionAmount
        2010-03-01 | 10     | 50
        2010-03-01 | 18     | 50
        2010-03-01 | 19     | 100
        2010-03-10 | 89     | 55
        2010-03-15 | 16     | 50
        2010-03-15 | 12     | 90
        2010-03-22 | 99     | 150

    desired result:

        created    | sumStatusActive | sumStatusInactive | sumTransactions
        2010-03-01 | 2               | 1                 | 200
        2010-03-10 | 0               | 1                 | 55
        2010-03-12 | 2               | 0                 | 0
        2010-03-13 | 0               | 0                 | 0
        2010-03-15 | 1               | 0                 | 140
        2010-03-22 | 0               | 0                 | 150

    Table dump:

        CREATE TABLE IF NOT EXISTS `users` (
          `created` date NOT NULL,
          `userId` int(11) NOT NULL,
          `status` smallint(6) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        INSERT INTO `users` (`created`, `userId`, `status`) VALUES
        ('2010-03-01', 10, 0),
        ('2010-03-01', 11, 1),
        ('2010-03-01', 12, 1),
        ('2010-03-10', 13, 0),
        ('2010-03-12', 14, 1),
        ('2010-03-12', 15, 1),
        ('2010-03-13', 16, 0),
        ('2010-03-15', 17, 1);

        CREATE TABLE IF NOT EXISTS `billing` (
          `created` date NOT NULL,
          `userId` int(11) NOT NULL,
          `transactionAmount` int(11) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        INSERT INTO `billing` (`created`, `userId`, `transactionAmount`) VALUES
        ('2010-03-01', 10, 50),
        ('2010-03-01', 18, 50),
        ('2010-03-01', 19, 100),
        ('2010-03-10', 89, 55),
        ('2010-03-15', 16, 50),
        ('2010-03-15', 12, 90),
        ('2010-03-22', 99, 150);
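    A sketch of one way to collapse the two per-source rows into one: wrap the UNION in a derived table and group again in the outer query (column names follow the question; the inactive count here is derived as SUM(1 - status)):

        SELECT created,
               SUM(totalActive)   AS sumStatusActive,
               SUM(totalInactive) AS sumStatusInactive,
               SUM(totalDeposit)  AS sumTransactions
        FROM (
            SELECT created,
                   SUM(status)     AS totalActive,
                   SUM(1 - status) AS totalInactive,
                   0               AS totalDeposit
            FROM users
            GROUP BY created
            UNION ALL
            SELECT created, 0, 0, SUM(transactionAmount)
            FROM billing
            GROUP BY created
        ) AS combined
        GROUP BY created
        ORDER BY created;

    UNION ALL (rather than UNION) keeps both source rows for a date so the outer SUM can merge them into a single row per created date.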


  • I created a custom (WPF) DataGridBoundColumn and get unexpected behaviour, what am I missing?

    - by aspic
    Hi, I am using a DataGrid (from Microsoft.Windows.Controls.DataGrid) to display items, and on this DataGrid I use a custom column which extends DataGridBoundColumn. I have bound an ObservableCollection to the ItemsSource of the DataGrid. Conversation is one of my own custom data types, which (among other things) has a boolean called Active. I bound this boolean to the DataGrid as follows:

        DataGridActiveImageColumn test = new DataGridActiveImageColumn();
        test.Header = "Active";
        Binding binding1 = new Binding("Active");
        test.Binding = binding1;
        ConversationsDataGrid.Columns.Add(test);

    My custom DataGridBoundColumn, DataGridActiveImageColumn, overrides the GenerateElement method to let it return an Image depending on whether the conversation it is called for is active or not. The code for this is:

        namespace Microsoft.Windows.Controls
        {
            class DataGridActiveImageColumn : DataGridBoundColumn
            {
                protected override FrameworkElement GenerateElement(DataGridCell cell, object dataItem)
                {
                    // Create Image element
                    Image myImage = new Image();
                    myImage.Width = 10;

                    bool active = false;
                    if (dataItem is Conversation)
                    {
                        Conversation c = (Conversation)dataItem;
                        active = c.Active;
                    }

                    BitmapImage myBitmapImage = new BitmapImage();
                    // BitmapImage.UriSource must be set in a BeginInit/EndInit block
                    myBitmapImage.BeginInit();
                    if (active)
                    {
                        myBitmapImage.UriSource = new Uri(@"images\active.png", UriKind.Relative);
                    }
                    else
                    {
                        myBitmapImage.UriSource = new Uri(@"images\inactive.png", UriKind.Relative);
                    }
                    // To save significant application memory, set the DecodePixelWidth or
                    // DecodePixelHeight of the BitmapImage value of the image source to the
                    // desired height or width of the rendered image. If you don't do this,
                    // the application will cache the image as though it were rendered at its
                    // normal size rather than just the size that is displayed.
                    // Note: in order to preserve aspect ratio, set DecodePixelWidth
                    // or DecodePixelHeight, but not both.
                    myBitmapImage.DecodePixelWidth = 10;
                    myBitmapImage.EndInit();

                    myImage.Source = myBitmapImage;
                    return myImage;
                }

                protected override FrameworkElement GenerateEditingElement(DataGridCell cell, object dataItem)
                {
                    throw new NotImplementedException();
                }
            }
        }

    All this works as expected, and when the Active boolean of a conversation changes while the program is running, the DataGrid is updated automatically. However: when there are more entries in the DataGrid than fit at any one time (and vertical scrollbars are added), the behavior of this column is strange. The conversations that are initially shown are correct, but when I use the DataGrid's scrollbar, conversations that enter the view seem to get a random status (although more inactive than active ones, which corresponds to the actual ratio). When I scroll back up, the active images of the conversations initially shown (before scrolling) are not correct anymore either. When I replace my custom DataGridBoundColumn class with (for instance) DataGridCheckBoxColumn, it works as intended, so my extension of the DataGridBoundColumn class must be incomplete. Personally I think it has something to do with the following, from the MSDN page on the GenerateElement method (http://msdn.microsoft.com/en-us/library/system.windows.controls.datagridcolumn.generateelement%28VS.95%29.aspx):

        Return Value
        Type: System.Windows.FrameworkElement
        A new, read-only element that is bound to the column's Binding property value.

    I do return a new element (the Image), but it is not bound to anything. I am not quite sure what I should do. Should I bind the Image to something? To what exactly? And why? (I have been experimenting, but was unsuccessful thus far, hence this post.) Thanks in advance.
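    The symptom is consistent with the DataGrid's row virtualization: cell elements are recycled while scrolling, so an element whose image was chosen once from dataItem shows stale data when its row is reused. The documentation quote points at the fix: return an element that is bound, so the value is re-evaluated per row. A sketch of one way to do that (the converter and its name are illustrative, not from the question):

        // Inside DataGridActiveImageColumn: bind the Image instead of
        // reading dataItem once.
        protected override FrameworkElement GenerateElement(DataGridCell cell, object dataItem)
        {
            Image myImage = new Image();
            myImage.Width = 10;

            Binding binding = new Binding("Active");          // same path as the column's Binding
            binding.Converter = new ActiveToImageConverter(); // bool -> BitmapImage
            BindingOperations.SetBinding(myImage, Image.SourceProperty, binding);
            return myImage;
        }

        // Illustrative converter: maps the Active flag to the right bitmap.
        public class ActiveToImageConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                bool active = (value is bool) && (bool)value;
                BitmapImage bmp = new BitmapImage();
                bmp.BeginInit();
                bmp.UriSource = new Uri(active ? @"images\active.png" : @"images\inactive.png",
                                        UriKind.Relative);
                bmp.DecodePixelWidth = 10;
                bmp.EndInit();
                return bmp;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    Because the binding is evaluated against whichever row the recycled cell currently represents, scrolling no longer shows stale images.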


  • I am getting this error on each machine after installing ruby and rails, I created one web site and

    - by Santodsh
    D:\PROJECTS\RubyOnRail\webapp\Welcome>ruby script\server
    => Booting WEBrick
    => Rails 2.3.4 application starting on http://0.0.0.0:3000
    => Call with -d to detach
    => Ctrl-C to shutdown server
    [2010-01-31 21:19:34] INFO  WEBrick 1.3.1
    [2010-01-31 21:19:34] INFO  ruby 1.8.6 (2007-09-24) [i386-mswin32]
    [2010-01-31 21:19:34] INFO  WEBrick::HTTPServer#start: pid=6576 port=3000
    /!\ FAILSAFE /!\  Sun Jan 31 21:19:38 +0530 2010
      Status: 500 Internal Server Error
      uninitialized constant Encoding
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:443:in `load_missing_constant'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:80:in `const_missing'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:92:in `const_missing'
        c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/encoding.rb:9:in `find'
        c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/database.rb:69:in `initialize'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
        c:/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:9:in `cache'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:28:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/params_parser.rb:15:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/session/cookie_store.rb:93:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/failsafe.rb:26:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:114:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/reloader.rb:34:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:108:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/static.rb:31:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/log_tailer.rb:17:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:50:in `service'
        c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
        c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:95:in `start'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `each'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `start'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:23:in `start'
        c:/ruby/lib/ruby/1.8/webrick/server.rb:82:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/server.rb:111
        c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        script/server:3
    /!\ FAILSAFE /!\  Sun Jan 31 21:19:39 +0530 2010
      Status: 500 Internal Server Error
      uninitialized constant Encoding
        (identical backtrace repeated for the second request)
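    The trace shows sqlite3-0.0.6/lib/sqlite3/encoding.rb referencing the Encoding class, which was introduced in Ruby 1.9 and does not exist on the Ruby 1.8.6 being run here. A hedged guess at the fix: the `sqlite3` gem at version 0.0.6 is not the SQLite driver that works on 1.8; the driver of that era was packaged as `sqlite3-ruby`, so something like the following may resolve it:

        gem uninstall sqlite3
        gem install sqlite3-ruby

    The alternative is to run the application on a Ruby 1.9 installation where Encoding exists.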


  • How can I access a method of the consumer class inside a method created at runtime in a method of th

    - by xxxxxxx
    I define a method inside a parameterized role that needs to create a new class at run time using Moose::Meta::Class->create and apply that exact parameterized role to it. I am also adding a new method to that role using:

        $new_class->meta->add_method( some_name => sub {
            my ($self) = @_;
            ...
        });

    Inside the sub { ... } I want to access a method of the consumer class and use it for something. I have tried using $self->get_method, but it didn't work. How do I do this? Please note that the $self inside the sub above is a MooseX::Role::Parameterized::Meta::Role::Parameterizable. I also have another question: if I do

        my $object = Moose::Meta::Class->create( "some_type", );

    why isn't $object of type some_type, instead of the ugly MooseX::Role::Parameterized::Meta::Role::Parameterizable, and how do I get an object of type some_type?
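    On the second question, a small sketch of the distinction in the standard Moose API: Moose::Meta::Class->create returns the metaclass object that describes the new class, not an instance of it; instances come from the class itself:

        use Moose::Meta::Class;

        # $meta describes the class "some_type"; it is a Moose::Meta::Class.
        my $meta = Moose::Meta::Class->create('some_type');

        # Instances of some_type are created through the class (or the meta):
        my $object  = $meta->name->new;    # $meta->name eq 'some_type'
        my $object2 = $meta->new_object;   # equivalent route via the metaclass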

    Read the article

  • trying to build Boost MPI, but the lib files are not created. What's going on?

    - by unknownthreat
    I am trying to run a program with Boost MPI, but the thing is I don't have the .lib. So I try to create one by following the instruction at http://www.boost.org/doc/libs/1_43_0/doc/html/mpi/getting_started.html#mpi.config The instruction says "For many users using LAM/MPI, MPICH, or OpenMPI, configuration is almost automatic", I got myself OpenMPI in C:\, but I didn't do anything more with it. Do we need to do anything with it? Beside that, another statement from the instruction: "If you don't already have a file user-config.jam in your home directory, copy tools/build/v2/user-config.jam there." Well, I simply do what it says. I got myself "user-config.jam" in C:\boost_1_43_0 along with "using mpi ;" into the file. Next, this is what I've done: bjam --with-mpi C:\boost_1_43_0>bjam --with-mpi WARNING: No python installation configured and autoconfiguration failed. See http://www.boost.org/libs/python/doc/building.html for configuration instructions or pass --without-python to suppress this message and silently skip all Boost.Python targets Building the Boost C++ Libraries. warning: skipping optional Message Passing Interface (MPI) library. note: to enable MPI support, add "using mpi ;" to user-config.jam. note: to suppress this message, pass "--without-mpi" to bjam. note: otherwise, you can safely ignore this message. warning: Unable to construct ./stage-unversioned warning: Unable to construct ./stage-unversioned Component configuration: - date_time : not building - filesystem : not building - graph : not building - graph_parallel : not building - iostreams : not building - math : not building - mpi : building - program_options : not building - python : not building - random : not building - regex : not building - serialization : not building - signals : not building - system : not building - test : not building - thread : not building - wave : not building ...found 1 target... The Boost C++ Libraries were successfully built! The following directory should be added to compiler include paths: C:\boost_1_43_0 The following directory should be added to linker library paths: C:\boost_1_43_0\stage\lib C:\boost_1_43_0> I see that there are many libs in C:\boost_1_43_0\stage\lib, but I see no trace of libboost_mpi-vc100-mt-1_43.lib or libboost_mpi-vc100-mt-gd-1_43.lib at all. These are the libraries required for linking in mpi applications. What could possibly gone wrong when libraries are not being built?

    Read the article

  • Why does a newly created EF-entity throw an ID is null exception when trying to save?

    - by Richard
    I´m trying out entity framework included in VS2010 but ´ve hit a problem with my database/model generated from the graphical interface. When I do: user = dataset.UserSet.CreateObject(); user.Id = Guid.NewGuid(); dataset.UserSet.AddObject(user); dataset.SaveChanges(); {"Cannot insert the value NULL into column 'Id', table 'BarSoc2.dbo.UserSet'; column does not allow nulls. INSERT fails.\r\nThe statement has been terminated."} The table i´m inserting into looks like so: -- Creating table 'UserSet' CREATE TABLE [dbo].[UserSet] ( [Id] uniqueidentifier NOT NULL, [Name] nvarchar(max) NOT NULL, [Username] nvarchar(max) NOT NULL, [Password] nvarchar(max) NOT NULL ); GO -- Creating primary key on [Id] in table 'UserSet' ALTER TABLE [dbo].[UserSet] ADD CONSTRAINT [PK_UserSet] PRIMARY KEY CLUSTERED ([Id] ASC); GO Am I creating the object in the wrong way or doing something else basic wrong?

    Read the article

  • why cacti is showing empty graph.??.even if rrd file created..

    - by Divya mohan Singh
    hii, i have develop my own snmp service..and i want to plot a graph of an OID provided. so, i have create graph in cacti. -) Its is showing device up. -) It is creating rrd file.(RRDTool says OK). -) showing the graph but its empty. but when i check it say rrdtool fetch AVERAGE it showing me all the values nan only..the monitored OID is having value 47 and i have set min=0 and max=100 i am using cacti appliance by rpath http://www.rpath.org/ui/#/appliances?id=http://www.rpath.org/api/products/cacti-appliance still i can show value on graph.. where is the problem??can anyone plz tell me??

    Read the article

  • SqlMetal, Sql Server 2008 database, Table with HierachyID, dal cs file is created sometimes ?

    - by judek.mp
    I have 2 databases with a 2 tables with HierachyID fields. For one database I can get a dal cs file, for the other database I cannot get a dal cs file ? HBus is a database I can get the dal cs for, ... SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext This kicks me out a file, ... which works, but excludes the HierarchyID field for the table and includes all other fields for that table. This is OK I do not mind. The above cmd line kicks out an warning but still produces a file, like so SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext Microsoft (R) Database Mapping Generator 2008 version 1.00.30729 for Microsoft (R) .NET Framework version 3.5 Copyright (C) Microsoft Corporation. All rights reserved. Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.HMsg' from SqlServer because the column's DbType is a user-defined type (UDT). Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.vwHMsg' from SqlServer because the column's DbType is a user-defined type (UDT). HMsg is a table with a HierarchyID field. I have another database, Elf, almost the same thing but I get a warning and an Error when using sql metal and I do not get a dal cs file ... SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext An error as well as the warning and the cs file fails to appear on my disc, ... :-( SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext Microsoft (R) Database Mapping Generator 2008 version 1.00.30729 for Microsoft (R) .NET Framework version 3.5 Copyright (C) Microsoft Corporation. All rights reserved. Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.EntityLink' from SqlServer because the column's DbType is a user-defined type (UDT). Error : Requested value 'ELF.SYS.HIERARCHYID' was not found. The fields are declared the same way in Elf db OrgNode [HierarchyID] null , in HBus db ... OrgNode [HierarchyID] null , Both databases are in the same instance of sql server 2008, so the HierarchyID is an inbuilt type, neither db has HierarchyID udt ,... cheers in advance for any replies ...

    Read the article

  • Can I make clojure macro that will allow me to get a list of all functions created by the macro?

    - by Rob Lachlan
    I would like to have a macro which I'll call def-foo. Def-foo will create a function, and then will add this function to a set. So I could call (def-foo bar ...) (def-foo baz ...) And then there would be some set, e.g. all-foos, which I could call: all-foos => #{bar, baz} Essentially, I'm just trying to avoid repeating myself. I could of course define the functions in the normal way, (defn bar ...) and then write the set manually. A better alternative, and simpler than the macro idea, would be to do: (def foos #{(defn bar ...) (defn baz ...)} ) But I'm still curious as to whether there is a good way for the macro idea to work.

    Read the article

  • Is an object in objective-c EVER created without going through alloc?

    - by Jared P
    Hi, I know that there are functions in the objective-c runtime that allow you to create objects directly, such as class_createInstance. What I would like to know is if anything actually uses these functions other than the root classes' (NSObject) alloc method. I think things like KVC bindings might, but those aren't present on the iPhone OS (to my knowledge, correct me if I'm wrong), so is there anything that would do this? In case you're wondering/it matters, I'm looking to allocate the size of an instance in a way that circumvents the objc runtime by declaring no ivars on a class, but overriding the +alloc method and calling class_createInstance(self, numberofbytesofmyivars). Thanks

    Read the article

  • Terminate long running thread in thread pool that was created using QueueUserWorkItem(win 32/nt5).

    - by Jake
    I am programming in a win32 nt5 environment. I have a function that is going to be called many times. Each call is atomic. I would like to use QueueUserWorkItem to take advantage of multicore processors. The problem I am having is I only want to give the function 3 seconds to complete. If it has not completed in 3 seconds I want to terminate the thread. Currently I am doing something like this: HANDLE newThreadFuncCall= CreateThread(NULL,0,funcCall,&func_params,0,NULL); DWORD result = WaitForSingleObject(newThreadFuncCall, 3000); if(result == WAIT_TIMEOUT) { TerminateThread(newThreadFuncCall,WAIT_TIMEOUT); } I just spawn a single thread and wait for 3 seconds or it to complete. Is there anyway to do something similar to but using QueueUserWorkItem to queue up the work? Thanks!

    Read the article

  • Problem with importing an mdf created with SQL EXPRESS 2008 into SQL SERVER 2005 [an ASP.NET MVC pro

    - by user252160
    the question is probably extremely easy to resolve, but I need to resolve it because I need to carry on with my project. I am using SQLEXPRESS 2008 at home, and I've been working on an ASP.NET MVC app that stores my DB in an mdf file in the project's folder. The problem is that the SQL Server in the Uni labs is SQL SERVER 2005, and when I try to open the mdf file with the VS Server Explorer,It says that the version of the mdf file is more than the server can accept. The only oprion that comes to my mind is exporting the DB as an sql file, just like I've done it thousand times with phpmyadmin. the thing is that the SQL MANAGEMENT STUDIO EXPRESS is not the most usable tool in the world, and for some strange reason all the articles I could find in Google were irrelevant. Please, help.

    Read the article

  • How do I add an icon as a classpath resource to an SWT window created with WindowBuilder?

    - by Zoot
    I'm trying to add an external icon from an *.ico file to a window that I'm creating using the WindowBuilder design window. I can select the shell, which brings up an "image" properties field. That brings up the image chooser dialog box: How do I make my icon show up in this menu as a classpath resource? The image works if an absolute path is given, but I don't want to use that option in my application. Thanks!

    Read the article

  • Is there a way to get an ASMX Web Service created in VS 2005 to receive and return JSON?

    - by Ben McCormack
    I'm using .NET 2.0 and Visual Studio 2005 to try to create a web service that can be consumed both as SOAP/XML and JSON. I read Dave Ward's Answer to the question How to return JSON from a 2.0 asmx web service (in addition to reading other articles at Encosia.com), but I can't figure out how I need to set up the code of my asmx file in order to work with JSON using jQuery. Two Questions: How do I enable JSON in my .NET 2.0 ASMX file? What's a simple jQuery call that could consume the service using JSON? Also, I notice that since I'm using .NET 2.0, I i'm not able to implement using System.Web.Script.Services.ScriptService. Here's my C# code for the demo ASMX service: using System; using System.Web; using System.Collections; using System.Web.Services; using System.Web.Services.Protocols; /// <summary> /// Summary description for StockQuote /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class StockQuote : System.Web.Services.WebService { public StockQuote () { //Uncomment the following line if using designed components //InitializeComponent(); } [WebMethod] public decimal GetStockQuote(string ticker) { //perform database lookup here return 8; } [WebMethod] public string HelloWorld() { return "Hello World"; } } Here's a snippet of jQuery I found on the internet and tried to modify: $(document).ready(function(){ $("#btnSubmit").click(function(event){ $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "http://bmccorm-xp/WebServices/HelloWorld.asmx", data: "", dataType: "json" }) event.preventDefault(); }); });

    Read the article

  • Problem with memset after an instance of a user defined class is created and a file is opened

    - by Liberalkid
    I'm having a weird problem with memset, that was something to do with a class I'm creating before it and a file I'm opening in the constructor. The class I'm working with normally reads in an array and transforms it into another array, but that's not important. The class I'm working with is: #include <vector> #include <algorithm> using namespace std; class PreProcess { public: PreProcess(char* fileName,char* outFileName); void SortedOrder(); private: vector< vector<double > > matrix; void SortRow(vector<double> &row); char* newFileName; vector< pair<double,int> > rowSorted; }; The other functions aren't important, because I've stopped calling them and the problem persists. Essentially I've narrowed it down to my constructor: PreProcess::PreProcess(char* fileName,char* outFileName):newFileName(outFileName){ ifstream input(fileName); input.close(); //this statement is inconsequential } I also read in the file in my constructor, but I've found that the problem persists if I don't read in the matrix and just open the file. Essentially I've narrowed it down to if I comment out those two lines the memset works properly, otherwise it doesn't. Now to the context of the problem I'm having with it: I wrote my own simple wrapper class for matrices. It doesn't have much functionality, I just need 2D arrays in the next part of my project and having a class handle everything makes more sense to me. The header file: #include <iostream> using namespace std; class Matrix{ public: Matrix(int r,int c); int &operator()(int i,int j) {//I know I should check my bounds here return matrix[i*columns+j]; } ~Matrix(); const void Display(); private: int *matrix; const int rows; const int columns; }; Driver: #include "Matrix.h" #include <string> using namespace std; Matrix::Matrix(int r,int c):rows(r),columns(c) { matrix=new int[rows*columns]; memset(matrix,0,sizeof(matrix)); } const void Matrix::Display(){ for(int i=0;i<rows;i++){ for(int j=0;j<columns;j++) cout << (*this)(i,j) << " "; cout << endl; } } Matrix::~Matrix() { delete matrix; } My main program runs: PreProcess test1(argv[1],argv[2]); //test1.SortedOrder(); Matrix test(10,10); test.Display(); And when I run this with the input line uncommented I get: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1371727776 32698 -1 0 0 0 0 0 6332656 0 -1 -1 0 0 6332672 0 0 0 0 0 0 0 0 0 0 0 0 0 -1371732704 32698 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 I really don't have a clue what's going on in memory to cause this, on a side note if I replace memset with: for(int i=0;i<rows*columns;i++) *(matrix+i) &= 0x0; Then it works perfectly, it also works if I don't open the file. If it helps I'm running GCC 64-bit version 4.2.4 on Ubuntu.I assume there's some functionality of memset that I'm not properly understanding.

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >