Search Results

Search found 31606 results on 1265 pages for 'generate table'.


  • Where ORMs blur the lines between code and data, how do you decide what logic should be a stored procedure, and what should be coded?

    - by PhonicUK
    Take the following pseudocode: CreateInvoiceAndCalculate(ItemsAndQuantities, DispatchAddress, User); And say CreateInvoice does the following: Create a new entry in an Invoices table belonging to the specified User, to be sent to the given DispatchAddress. Create a new entry in an InvoiceItems table for each of the items in ItemsAndQuantities, storing the Item, the Quantity, and the cost of the item as of now (by looking it up from an Items table). Calculate the total amount of the invoice (ex shipping and taxes) and store it in the new Invoice row. At a glance you wouldn't be able to tell if this was a method in my application's code or a stored procedure in the database being exposed as a function by the ORM. And to some extent it doesn't really matter. Now technically none of this is business logic. You're not making any decisions - just performing a calculation and creating records. However, some may argue that because you are performing a calculation that affects the business (the total amount to be invoiced), this isn't something that should be done in a stored procedure and should instead be in code. So for this specific example - why would it be more appropriate to do one or the other? And where do you draw the line? Or does it even particularly matter as long as it's sufficiently well documented?
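
    For reference, a minimal sketch of the code-side version of this logic. All names (Item, InvoiceItem, Invoice, catalogue) are invented stand-ins for whatever ORM entities the application would define, not anything from the question:

        from dataclasses import dataclass, field
        from decimal import Decimal

        # Minimal stand-ins for the ORM entities (hypothetical names).
        @dataclass
        class Item:
            id: int
            current_price: Decimal

        @dataclass
        class InvoiceItem:
            item: Item
            quantity: int
            unit_cost: Decimal  # the price "as of now", like the InvoiceItems table

        @dataclass
        class Invoice:
            user: str
            dispatch_address: str
            lines: list = field(default_factory=list)
            total: Decimal = Decimal("0")  # ex shipping and taxes

        def create_invoice_and_calculate(items_and_quantities, dispatch_address, user, catalogue):
            invoice = Invoice(user=user, dispatch_address=dispatch_address)
            for item_id, quantity in items_and_quantities:
                item = catalogue[item_id]                       # the Items table lookup
                invoice.lines.append(InvoiceItem(item, quantity, item.current_price))
                invoice.total += item.current_price * quantity  # the "calculation"
            return invoice

        catalogue = {1: Item(1, Decimal("9.99"))}
        print(create_invoice_and_calculate([(1, 3)], "10 Main St", "phonicuk", catalogue).total)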

    Read the article

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines etc.). There can be any given number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything from all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems: (1) substring searching can only count exact matches; (2) the 50% rule applies to each table separately, so MySQL may fail to return important matches or too naively discard common words; (3) it is quite expensive in terms of query count and execution time (not an issue right now, as there is not a lot of data in the tables yet); (4) normalized data is not searched at all (I have separate tables for categories, languages and file attachments). My planned solution: create a single table having columns similar to id, table_id, row_id, data. Every time a row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for doing searches instead. Are there any more efficient solutions or best practices for approaching this? Thanks.
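
    To make the planned solution concrete, a minimal sketch of the keep-in-sync step, using Python's sqlite3 purely as a stand-in for the real MySQL tables (the column names are the ones proposed above; everything else is an illustrative assumption):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE search_index ("
                    "  table_id INTEGER, row_id INTEGER, data TEXT,"
                    "  PRIMARY KEY (table_id, row_id))")

        def sync_search_row(table_id, row_id, fields):
            # Concatenate every field of the source row into one searchable
            # blob, then upsert it into the central table.
            blob = " ".join(str(v) for v in fields.values())
            con.execute("INSERT OR REPLACE INTO search_index "
                        "(table_id, row_id, data) VALUES (?, ?, ?)",
                        (table_id, row_id, blob))

        # Call this from every create/modify hook of every archive table:
        sync_search_row(1, 42, {"title": "On Archives", "language": "en"})
        print(con.execute("SELECT * FROM search_index").fetchall())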

    Read the article

  • EXEC() syntax error using ODBC

    - by Mike Trader
    I have written a little ETL application that I wish to run a few lines of TSQL from. If I enter a simple query like "SELECT * FROM MyTable" everything is fine. All single-line commands run as expected. A multiline query like this is also fine: DECLARE @TableName NVARCHAR(MAX) set @TableName = 'MyTable' EXECute ( 'DROP TABLE '+ @TableName ) However, when I try and run: DECLARE @TableName NVARCHAR(MAX) OPEN Tables FETCH NEXT FROM Tables INTO @TableName WHILE @@FETCH_STATUS = 0 BEGIN EXEC( 'DROP TABLE ' + @TableName ) FETCH NEXT FROM Tables INTO @TableName END I get a syntax error after TABLE in the EXEC() call. I have spent 6 hours trying to figure this out, thinking perhaps I need to escape the single quote or something. I just cannot see the problem. A set of fresh eyes would be appreciated.

    Read the article

  • What are some ways to texture map a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D Game, and I'm trying to have a proper and nice environnement. I actually followed a tutorial to create a terrain from a heightmap. To texture it, I just apply a grass texture on it and tile it a number of times. But what I want to do is to have a really realistic texturing, but also generate it automatically (for example if I want to use Perlin noise to generate a terrain and then texture it). I already learned about multi-texturing, loading a map file with different colors for different textures, but I don't think this is really efficient, for instance for cliffs or very steep areas it will tile a texture badly as it's a view from the top. (Also, I don't know how I'll draw roads or dirt paths with that.) I'm looking for an efficient solution to realistically texture mapping procedurally-generated terrain.
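
    One common automatic approach is to derive blend weights from slope, so cliffs get a rock texture regardless of where the noise put them. An illustrative sketch (not from the question) in Python/NumPy, with made-up threshold values; the same idea extends to altitude bands (snow above some height), while roads are usually weights painted along a spline rather than derived from the terrain:

        import numpy as np

        def texture_weights(heightmap, cell_size=1.0, slope_threshold=0.6):
            # Per-cell slope from the heightmap gradient (rise over run).
            gy, gx = np.gradient(heightmap.astype(float), cell_size)
            slope = np.sqrt(gx * gx + gy * gy)
            # Flat cells lean toward grass, steep cells toward rock/cliff.
            rock = np.clip(slope / slope_threshold, 0.0, 1.0)
            grass = 1.0 - rock
            return grass, rock   # feed these as splat-map channels to the shader

        grass, rock = texture_weights(np.random.rand(64, 64) * 10.0)
        print(grass.shape, float(rock.max()))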

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned; really, that's the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager; you include references to these items and use them like so: // Instantiate an entity framework file reader and file writer MetadataLoader loader = new MetadataLoader(this); EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this); // Load the entity model metadata workspace MetadataWorkspace metadataWorkspace = null; bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace); EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace); // Create an IO class to contain the 'get' methods for all entities in the model fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property of each entity, so we can generate classes and methods for each. The code for that is blissfully simple: // Iterate through each entity in the model foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name)) { // Iterate through each primitive property of the entity foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity)) { // TODO: Create properties } // Iterate through each relationship of the entity foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity)) { // TODO: Create associations } }

    There really isn't anything more advanced than that going on in the text template; the only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so: public partial class <#=entity.Name#> To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.

    The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic xml IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me; I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML: return from element in GetXmlData(_PlayerDataFile).Descendants("Player") select new Player { Id = int.Parse(element.Attribute("Id").Value) ,ParentName = element.Parent.Name.LocalName ,ParentId = long.Parse(element.Parent.Attribute("Id").Value) ,Name = element.Attribute("Name").Value ,PositionId = int.Parse(element.Attribute("PositionId").Value) }; It is all done in one line of code, no looping needed. Even though GetXmlData loads the entire xml file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data.

    Speaking of the data: where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate xml at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file. Doing so should get us something that looks like this: <Team Id="0" Name="0"> <Player Id="0" Name="0" PositionId="0"> <Statistic Id="0" PassYards="0" RushYards="0" Year="0" /> </Player> </Team> Sadly, it is missing the Positions node (haven't thought of a way to generate lookup xml yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data.

    Note that this is TWO xml files, Lookups.xml and Teams.xml: <Lookups Id="0"> <Position Id="0" Name="Quarterback"/> <Position Id="1" Name="Runningback"/> </Lookups> <Teams Id="0"> <Team Id="0" Name="Chicago"> <Player Id="0" Name="QB Bears" PositionId="0"> <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" /> <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" /> </Player> <Player Id="1" Name="RB Bears" PositionId="1"> <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" /> <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" /> <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" /> </Player> </Team> </Teams> Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

    Read the article

  • How to play a fixed frequency sound using Python

    - by user98415
    I have tried a number of ways of playing a fixed-frequency sound (e.g. 1000 Hz) and nothing works. I have downloaded "beep" and that makes no noise. I tried interfacing to pyao, and that had no effect. I tried interfacing to audiere, and I get a runtime error indicating the library could not be found, despite installing it from the software centre. Any guidance on installing the appropriate libraries, and any relevant code, would be most appreciated. I cannot generate .mp3/.wav files for this; I need to generate the tones at run time. Many thanks for your help.
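
    For what it's worth, a minimal sketch of run-time tone generation (no intermediate .wav file), assuming NumPy and PyAudio are installed (e.g. pip install numpy pyaudio, with the PortAudio system library present):

        import numpy as np
        import pyaudio

        RATE = 44100      # samples per second
        FREQ = 1000.0     # tone frequency in Hz
        SECONDS = 2.0

        # Synthesize the sine wave in memory at run time.
        t = np.arange(int(RATE * SECONDS)) / RATE
        samples = (0.5 * np.sin(2 * np.pi * FREQ * t)).astype(np.float32)

        pa = pyaudio.PyAudio()
        stream = pa.open(format=pyaudio.paFloat32, channels=1, rate=RATE, output=True)
        stream.write(samples.tobytes())   # blocks until the tone has played
        stream.stop_stream()
        stream.close()
        pa.terminate()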

    Read the article

  • From the Coalface - 3 - Work as hard as you can to be as lazy as you can!

    - by TATWORTH
    The saga of the Change Log. A recent conversation reminded me of the need for change logs within a database, to record when various change scripts were run. Creating the required table is simple. A typical table for this consists of: Id - identity integer primary key; ChangeFileName - NVARCHAR(128) to hold the name of the file run; DateAdded - DateTime, non-null, with a default value of getutcdate(); Purpose - NVARCHAR(128); Rerunnable - Bit, non-null, default 0. With good design of the table, only two data values normally need to be supplied. Two stored procedures, one for inserting data and one for listing the log in reverse sequence, complete the database essentials. The complete implementation can be found in the CommonData solution at http://CommonData.CodePlex.Com By including a call to the add-Change-Log stored procedure, each script can log its name and purpose for posterity. The scripts that were applied to, say, the UAT system, and their sequence of application, can then be readily identified for running on the Live system.

    Formatting XML: XML is often produced as one continuous string with no embedded CR/LF. To get it into human-readable form, open it in Visual Studio, swap to another tab and back, and click the format document button. The XML will then be nicely formatted!
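
    A minimal sketch of that design, adapted here to Python's sqlite3 so it can run anywhere (the original targets SQL Server, where Id would be an identity column and DateAdded would default to getutcdate()):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""
            CREATE TABLE ChangeLog (
                Id             INTEGER PRIMARY KEY AUTOINCREMENT,
                ChangeFileName TEXT NOT NULL,                           -- NVARCHAR(128)
                DateAdded      TEXT NOT NULL DEFAULT (datetime('now')), -- getutcdate()
                Purpose        TEXT,                                    -- NVARCHAR(128)
                Rerunnable     INTEGER NOT NULL DEFAULT 0               -- BIT
            )""")

        # As noted above, only two values normally need supplying:
        con.execute("INSERT INTO ChangeLog (ChangeFileName, Purpose) VALUES (?, ?)",
                    ("20130501_add_index.sql", "Add covering index to Orders"))

        # The 'list in reverse sequence' view:
        for row in con.execute("SELECT * FROM ChangeLog ORDER BY Id DESC"):
            print(row)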

    Read the article

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even on a per-index basis, wouldn't it? This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

    +-----------------+--------------+------+
    | Field           | Type         | Null |
    +-----------------+--------------+------+
    | database_name   | varchar(192) | NO   |
    | table_name      | varchar(192) | NO   |
    | index_name      | varchar(192) | NO   |
    | compress_ops    | int(11)      | NO   |
    | compress_ops_ok | int(11)      | NO   |
    | compress_time   | int(11)      | NO   |
    | uncompress_ops  | int(11)      | NO   |
    | uncompress_time | int(11)      | NO   |
    +-----------------+--------------+------+

    similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

    SELECT database_name, table_name, index_name,
           compress_ops - compress_ops_ok AS failures
    FROM information_schema.innodb_cmp_per_index
    ORDER BY failures DESC;

    would reveal the most problematic tables and indexes with the highest compression failure rate. From there, the way to improve performance would be to try increasing the compressed page size or changing the structure of the table/indexes or the data being stored, and to see whether that has a positive impact on performance.

    Read the article

  • mysql Incorrect Information in File: (corrupt) error

    - by Nick M.
    I've recently suffered a power outage on one of my monitoring servers at the office. The outage caused some database tables to become corrupted. I've successfully repaired 3-4 tables by using the "use_frm" option; however, there are still 3 that seem to be badly corrupted and are not responding to the MySQL REPAIR command (with or without use_frm):

    mysql> REPAIR TABLE poller_item;
    +-------------------+--------+----------+-----------------------------------------------------------+
    | Table             | Op     | Msg_type | Msg_text                                                  |
    +-------------------+--------+----------+-----------------------------------------------------------+
    | cacti.poller_item | repair | Error    | Incorrect information in file: './cacti/poller_item.frm' |
    | cacti.poller_item | repair | error    | Corrupt                                                   |
    +-------------------+--------+----------+-----------------------------------------------------------+

    In this scenario, is there any other way to repair a table? MySQL version: mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (x86_64) using readline 6.1

    Read the article

  • Image mapping using lookup tables [on hold]

    - by jblasius
    I have an optimization problem. I'm using a look-up table to map a pixel in an image: for (uint32_t index = 0u; index < imgSize; index++) { img[ lt[ index ] ] = val; } Is there a faster way to do this, perhaps using a reinterpret_cast or something like that? I am accessing two different memory addresses, so what is the compiler doing? One solution is to do a set of reads first, to access adjacent memory addresses: struct mblock { uint32_t buf[10u]; }; for (uint32_t index = 0u; index < imgSize; index += 10u) { mblock mb = *reinterpret_cast<const mblock*>(lt + index); for (uint8_t i = 0u; i < 10u; i++) { img[ mb.buf[i] ] = val; } } This speeds up the code because I'm separating the image access from the table look-up; the positions in the look-up table are adjacent. I still have the image access problem, as it is accessing random address positions.

    Read the article

  • Would this data requirement suit a Document-Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary; a user has to record a thought in one column, describe the situation, rate how they felt etc. The other requirement is that a user should be able to create their own diary templates. They might have a need for a 10 column diary entry per day and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates as the fields will be fixed. But for custom diary templates I imagine I would would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries) and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance / scaling aside, would it be any simpler or make more sense to use a document oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.
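
    As a quick illustration of how this flattens out in a document store, a minimal sketch using PyMongo (assumes a running MongoDB; all collection and field names are invented):

        from pymongo import MongoClient

        db = MongoClient()["diary_app"]

        # A user-defined template: the columns live in the data, not the schema.
        template_id = db.templates.insert_one({
            "name": "thought diary",
            "columns": [
                {"label": "Thought", "type": "text"},
                {"label": "Situation", "type": "text"},
                {"label": "Feeling rating", "type": "int", "max": 10},
            ],
        }).inserted_id

        # A day's entry just stores values keyed by those columns; a 10-column
        # template rated out of 50 needs no schema change at all.
        db.entries.insert_one({
            "user_id": 42,
            "template_id": template_id,
            "date": "2013-05-01",
            "values": {"Thought": "...", "Situation": "...", "Feeling rating": 7},
        })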

    Read the article

  • What is name server of my domain, pointing to hosted site on Github Pages?

    - by nournia
    I've got a domain name and I want to set it up for my site hosted on the GitHub Pages service. The documentation mentions that if you are using a top-level domain like example.com, you must use an A record pointing to 204.232.175.78. But my domain registrar doesn't let me add a DNS record; it only asks me to fill in a name server table like this:

    Name Server (NS Record)
    Server Name        | Server IP
    ns53.parsihost.com | 94.232.173.52
    ns2.parsihost.com  | 206.223.171.254

    I asked the registrar about this problem and they told me, "You must put Github's name server in those cells". So, what is the mapping from this table to DNS records, and what is your advice for filling in this kind of table?

    Read the article

  • Are there any reliable solutions for annotations/reflection/code-metadata in C?

    - by dukeofgaming
    Not all languages support Java-like annotations, C#-like attributes, or code metadata in general; however, that doesn't mean it is not possible to have this in languages that lack it. An example is PHP with Stubbles and the Doctrine annotation library. My question is: is there anything like this for C, or are there any reliable ways of doing reflection with extended code metadata in C? Ideally, I'm looking for something that reads javadoc-like comments. Edit: The reason for me *needing*, as opposed to just wanting, is that I need to generate C code and code metadata from a database, as well as being able to edit that metadata and update the database. The volume of the work (~15,000 variables/structures/functions to generate from this database) justifies the solution.
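
    To make the generation half concrete, a minimal sketch of database-driven C code generation that embeds the metadata as javadoc-style comments (so it can be parsed back out later). All table, column and struct names are invented, and sqlite3 stands in for whatever database actually holds the definitions:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE fields (struct TEXT, name TEXT, c_type TEXT, doc TEXT)")
        con.execute("INSERT INTO fields VALUES "
                    "('MotorConfig', 'max_rpm', 'uint16_t', 'Maximum motor speed')")

        def emit_struct(struct_name):
            lines = ["typedef struct {"]
            for name, c_type, doc in con.execute(
                    "SELECT name, c_type, doc FROM fields WHERE struct = ?",
                    (struct_name,)):
                lines.append(f"    /** @brief {doc} */")  # metadata rides along as comments
                lines.append(f"    {c_type} {name};")
            lines.append(f"}} {struct_name};")
            return "\n".join(lines)

        print(emit_struct("MotorConfig"))

    A matching parser would walk the generated headers, read the /** ... */ blocks back into rows, and diff them against the database to complete the round trip.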

    Read the article

  • Excel conditional selection?

    - by Andrew
    I think this is a simple question. I have a big table of data points and I want to take an average of a subset of a single column. For example, if A is "age" and B is "gender," what command could I use to calculate the average age of the women in my table? I know I can do this by sorting the table by column B and then selecting only the column A values, but I want to build up to much more complicated conditional terms (e.g. if A is 5 and B is 3 and C is 4, then give me the average of D). Thanks!
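
    For comparison only (a tool swap, not an Excel answer), the same conditional averages are one-liners in Python/pandas, and extra conditions compose with &:

        import pandas as pd

        df = pd.DataFrame({"age": [25, 34, 41, 29], "gender": ["F", "M", "F", "F"]})

        # Average age of the women:
        print(df.loc[df["gender"] == "F", "age"].mean())

        # Stacked conditions, e.g. "A is 5 and B is 3 and C is 4, average D":
        # mask = (df["A"] == 5) & (df["B"] == 3) & (df["C"] == 4)
        # print(df.loc[mask, "D"].mean())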

    Read the article

  • If I drop my clustered PK and add a new one, what order will my rows be in?

    - by stack
    In SQL Server, I'm looking at TableA, which currently has a uniqueidentifier clustered primary key. The GUID has no meaning in any context. (I'll give you a second to clean up your keyboard and monitor and set down the soda.) I'd like to drop that primary key and add a new unique integer primary key to the table. My question is this: when I drop the index, modify the column from uniqueidentifier to int, and add the new clustered unique primary key to the modified column, will the new PK values be in the order of insertion into the table, or will they be in some other order? Is this the right way to go here? Will this work? (I'm kind of a noobkin with regard to table creation/modification.)

    Read the article

  • Can you recommend a game server for a facebook board game?

    - by Yekmer Simsek
    I am seeking a game server that will scale well. All commercial and/or free software alternatives are welcome. The game will be a board game similar to poker. Some technical details are listed below. There will be a table consisting of 4 people; to send them messages I need a reliable channel manager. A table will be ready to play for at least 5 minutes. People will wait for some time and, if they are not playing, will be kicked by the server, so there needs to be a reliable timed task queue for executing such tasks. The server should be quick enough to respond and show changes to all 4 people at a table simultaneously; to achieve this it should have a powerful I/O library. I am thinking of keeping state in memory for quick response times, but that comes with scalability problems, and some variables would then need to be thread-safe across multiple nodes. Flash (AS3) and Unity (.NET 2.0 C# Mono) client APIs should be available for socket connections. PS: I am using the Reddwarf server, but it lacks documentation and multi-node support.

    Read the article

  • SQL UPDATE based on condition

    - by LtDan
    We need to update a table with the user's id (NBK). The table with NBK also has the user status (0 - 1), and only one user will have 1 at a time. The challenge is to (1) capture the active user and (2) update the other table with the user's NBK. I hope the code below just has a simple syntax error that I cannot find? Dim nb As String Dim NBK As String nb = [Employees]![NBK] & "' WHERE " nb = nb & " " & [Employees]![Status] = '1' NBK = " Update tbl_DateTracking SET NBK = " NBK = NBK & "'" & nb & "' WHERE " NBK = NBK & "CaseId = '" & CaseId & "' AND OCC_Scenario = '" & OCC_Scenario & "' ;" DoCmd.RunSQL nb DoCmd.RunSQL NBK

    Read the article

  • Problems with Ranking when Using Sourcing Rules And ASLs From Blanket Agreements?

    - by LisaO
    Are you using Sourcing Rules and the Approved Supplier List with Blanket Purchase Agreements (BPA), and it seems like Ranking is not working correctly? For example: the Sourcing Rule being used has effective dates from 01-APR to 31-MAR for 2013, 2014 and 2015. One BPA is defined for Supplier A, which was originally set to Rank 1 with 100% allocation. A new BPA was then created for the same item and with the same effective dates as the current BPA, but for a different supplier. When Generate Sourcing Rules is run, it adds the new BPA/Supplier to the Sourcing Rule, but it is added as Rank 1, with the old rule changed to Rank 2. For complete information refer to Doc ID 1678447.1, "Generate Sourcing Rules And ASLs From Blanket Agreements Ranking not Behaving As Expected". Still have questions? Access the Procurement Community and, using the 'Start a Discussion' link, post your question.

    Read the article

  • EXCEL function working like SQL group by + count(distinct *)?

    - by Solo
    Suppose I have an EXCEL sheet with the data below:

    CODE (COL A) | VALUE (COL B)
    ==============================
    A01 | 10
    A01 | 20
    A01 | 30
    A01 | 10
    B01 | 30
    B01 | 30

    Is there an EXCEL function working like this?

    SELECT CODE, count (Distinct *) FROM TABLE GROUP BY CODE

    CODE | Distinct Count of Value
    ===================================
    A01 | 3
    B01 | 1

    Or, better yet, can we have an Excel formula pasted in column C to get something like this?

    CODE (COL A) | VALUE (COL B) | DISTINCT VALUE COUNT WITH MATCHING CODE (COL C)
    ===============================================================================
    A01 | 10 | 3
    A01 | 20 | 3
    A01 | 30 | 3
    A01 | 10 | 3
    B01 | 30 | 1
    B01 | 30 | 1

    I know I can use a pivot table to get this result easily. However, due to reporting requirements I have to append the "distinct count" column to the excel sheet, hence a pivot table is not an option. My last resort is to use an Excel macro (which is fine), but before that I would like to learn whether Excel functions can accomplish this kind of task. Many thanks!
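
    Since a macro is already on the table, the same computation sketched in Python/pandas for comparison (transform('nunique') broadcasts the per-CODE distinct count to every row, exactly the desired column C):

        import pandas as pd

        df = pd.DataFrame({"CODE": ["A01", "A01", "A01", "A01", "B01", "B01"],
                           "VALUE": [10, 20, 30, 10, 30, 30]})

        # Per-group distinct count, repeated on each row of the group:
        df["DISTINCT_COUNT"] = df.groupby("CODE")["VALUE"].transform("nunique")
        print(df)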

    Read the article

  • How compilers know about other classes and their properties?

    - by OnResolve
    I'm writing my first programming language, which is object-oriented, and so far so good with creating a single 'class'. But let's say I want two classes, ClassA and ClassB. Provided these two have nothing to do with each other, all is good. However, say ClassA creates a ClassB - this poses two related questions: how would the compiler know, when compiling ClassA, that ClassB even exists, and, if it does, how does it know its properties? My thinking thus far: instead of fully compiling one class at a time (i.e. scan, parse and generate code per "file" - not really a file, per se, but a "class"), do I need to scan and parse each one first, and then generate code for all of them?
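
    That is indeed the usual answer: multiple passes over a shared symbol table. A minimal sketch of the idea (the data shapes are invented for illustration):

        # Pass 1 declares every class and its members; pass 2 resolves
        # references, so ClassB can be used before its code is generated.
        class ClassInfo:
            def __init__(self, name, properties):
                self.name = name
                self.properties = properties   # property name -> type name

        def pass_one(parsed_classes, symbols):
            for cls in parsed_classes:
                symbols[cls["name"]] = ClassInfo(cls["name"], cls["properties"])

        def pass_two(parsed_classes, symbols):
            for cls in parsed_classes:
                for ref in cls["references"]:
                    if ref not in symbols:
                        raise NameError(f"unknown class {ref} used in {cls['name']}")
                # ...type-check member accesses and generate code here...

        symbols = {}
        classes = [
            {"name": "ClassA", "properties": {"b": "ClassB"}, "references": ["ClassB"]},
            {"name": "ClassB", "properties": {}, "references": []},
        ]
        pass_one(classes, symbols)
        pass_two(classes, symbols)   # ClassA sees ClassB although it comes later
        print(sorted(symbols))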

    Read the article

  • Copying Tables from a Website

    - by amemus
    I have difficulty making an Excel-readable file from a table on a website. The problems very specific to my question are: I have to use IE 7 to access the site; Excel is installed on another computer; and the site does not let me view the HTML of the table. Normally, I would simply select the table I want and drag and drop it into Excel, or I would view the page source and copy the HTML data. Neither works in this case. Is there any handy tool out there?

    Read the article

  • Creating thumbnails with the same name as the pictures

    - by Duby
    Please, here is my little script for creating thumbnails of pictures saved in a folder named 'pictures', and saving them in another folder named 'thumbs': #!/bin/bash for i in pictures/*.jpg do convert -thumbnail 100 "$i" "thumbs/$(basename "$i")" done However, there are two things the program doesn't do: 1) It does not retain the name of the pictures in the thumbnail. For instance, I would want it to generate a thumbnail with the name pic.jpg for a picture named pic.jpg. 2) Also, when I run the program, I don't want it to generate the thumbnail for a picture whose thumbnail it has already generated, unless that picture has been modified. Any help will be very much appreciated. Thank you.
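
    Both requests are straightforward in a minimal Python alternative (a tool swap from ImageMagick, assuming Pillow is installed via pip install Pillow): it keeps each picture's name and skips thumbnails that are already newer than their source:

        import os
        from PIL import Image

        SRC, DST = "pictures", "thumbs"
        os.makedirs(DST, exist_ok=True)

        for name in os.listdir(SRC):
            if not name.lower().endswith(".jpg"):
                continue
            src = os.path.join(SRC, name)
            dst = os.path.join(DST, name)             # same file name as the picture
            if os.path.exists(dst) and os.path.getmtime(dst) >= os.path.getmtime(src):
                continue                              # already generated and up to date
            img = Image.open(src)
            img.thumbnail((100, 100))                 # like convert -thumbnail 100
            img.save(dst)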

    Read the article

  • OOP Structure for web application

    - by Query
    Ok, so I have a website on which users complete tasks to earn points, and when they earn enough points they rise in rank. The site, from my understanding, is very basic and executes at most one or two queries per page. There is a user table, a support ticket table, and an orders table, all of which contain a username column relating them. Our class was familiarized with OOP back in high school with Java, but that was for video games, and I could grasp why you would need a Player class and an Enemy class. However, I don't understand its application to the web. At least not in my situation. I understand the user class might contain things like getUsername, getPoints, getEmail, setEmail, addPoints (does this belong here? Or should only things the user can manipulate be here?), etc. But I'm at a loss with everything else, such as user registration. Can you help give me a wireframe that I could wrap my head around? Pointing me to a good eBook would help greatly.
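
    As one possible starting point, a minimal sketch of that User class (names, thresholds and the registration placement are all illustrative assumptions, not a canonical design):

        class User:
            RANK_THRESHOLDS = [0, 100, 500, 2000]   # points needed per rank (made up)

            def __init__(self, username, email, points=0):
                self.username = username
                self.email = email
                self.points = points

            @property
            def rank(self):
                return sum(1 for t in self.RANK_THRESHOLDS if self.points >= t)

            def add_points(self, amount):
                # Belongs here because rank depends on points; the task that
                # *awards* the points can live in a separate Task class.
                self.points += amount

            @classmethod
            def register(cls, username, email):
                # Validation and the INSERT into the user table go here; a
                # classmethod fits because no User exists yet to act on.
                return cls(username, email)

        player = User.register("query", "player@example.com")
        player.add_points(150)
        print(player.rank)   # 2: passed the 0 and 100 thresholds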

    Read the article
