Search Results

Search found 21392 results on 856 pages for 'order of operations'.

Page 501/856 | < Previous Page | 497 498 499 500 501 502 503 504 505 506 507 508  | Next Page >

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they are able to enter SQL code which is executed by the isql command. I would like to extend the application to use DBUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DBUnit rather than just SQL statements is to be able to create and store XML and XLS DataSets on a server where they can be associated with their test cases and used for data setup. My question is: How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface? I have considered: Creating a simple programming language and a parser to read some simple syntax involving the DBUnit method names, which accept a parameter giving the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and provide each file with an identifier which could be passed as a parameter to the methods in this simple programming language. Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went with this approach, how can I execute the methods and their parameters that I parse from the XML document? Creating a table in the database which stores the method and a FK relation to a catalogued DataSet file; however, I don't think this would be a good solution due to the fact that data entry would be tedious. Thanks for your help.
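
    One way to approach the first option without inventing a full language is a small lookup that maps the operation names users type in to DBUnit's built-in DatabaseOperation constants, with the catalogue identifier resolved to a stored DataSet file before the call. A minimal sketch under that assumption (the class and method names here are illustrative, not part of DBUnit; XLS files could be handled the same way via org.dbunit.dataset.excel.XlsDataSet):

        import java.io.File;
        import java.io.FileInputStream;
        import java.util.HashMap;
        import java.util.Map;
        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.dataset.IDataSet;
        import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
        import org.dbunit.operation.DatabaseOperation;

        public class OperationRunner {
            // Maps the names users may select in the web form to DBUnit's built-in operations.
            private static final Map<String, DatabaseOperation> OPERATIONS = new HashMap<String, DatabaseOperation>();
            static {
                OPERATIONS.put("CLEAN_INSERT", DatabaseOperation.CLEAN_INSERT);
                OPERATIONS.put("INSERT", DatabaseOperation.INSERT);
                OPERATIONS.put("UPDATE", DatabaseOperation.UPDATE);
                OPERATIONS.put("REFRESH", DatabaseOperation.REFRESH);
                OPERATIONS.put("DELETE_ALL", DatabaseOperation.DELETE_ALL);
            }

            /** Executes one user-requested operation against a catalogued XML DataSet file. */
            public void run(IDatabaseConnection connection, String operationName, File dataSetFile) throws Exception {
                DatabaseOperation operation = OPERATIONS.get(operationName);
                if (operation == null) {
                    throw new IllegalArgumentException("Unknown operation: " + operationName);
                }
                IDataSet dataSet = new FlatXmlDataSetBuilder().build(new FileInputStream(dataSetFile));
                operation.execute(connection, dataSet);
            }
        }

    The same lookup serves the XML-DTD idea too: the parsed operation/dataset pairs just feed this method, so nothing has to be "executed from" the XML beyond looking the names up in the map.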

    Read the article

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method which allows me to add attachments using string data rather than path to the file. I came up with something like this: public function addAttachmentString($string, $name='', $encoding = 'base64', $type = 'application/octet-stream') { $path = 'php://memory/' . md5(microtime()); $file = fopen($path, 'w'); fwrite($file, $string); fclose($file); $this->AddAttachment($path, $name, $encoding, $type); } However, all I get is a PHP warning: PHP Warning: fopen() [<a href='function.fopen'>function.fopen</a>]: Invalid php:// URL specified There aren't any decent examples with the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with using this? My alternative is to create a temporary file and clean up - but that will mean having to write to disc, and this function will be used as part of a large batch process and I want to avoid slow disc operations (old server) where possible. This is only a short file but has different information for each person the script emails.
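
    For what it's worth, the warning itself is expected: php://memory is not a filesystem namespace, so a path suffix like the md5 value is rejected outright, and even a plain php://memory stream only exists for the handle that opened it; a second fopen() gets a new, empty stream. A small sketch of the behaviour (illustrative only, not a change to the Worx mailer):

        <?php
        $string = "attachment body ...";

        // Valid: no path suffix. The data lives only behind this handle.
        $handle = fopen('php://memory', 'r+');
        fwrite($handle, $string);
        rewind($handle);
        echo strlen(stream_get_contents($handle)) . "\n";   // length of $string

        // A second open does NOT see the earlier data - it is a fresh stream.
        $again = fopen('php://memory', 'r');
        var_dump(stream_get_contents($again));              // string(0) ""
        fclose($again);
        fclose($handle);

    So handing a php:// "path" to AddAttachment() cannot work. If the mailer can only accept paths, the usual fallback is tempnam(sys_get_temp_dir(), 'att') plus file_put_contents() and an unlink() after sending; if the class can be extended instead, an attachment method that takes the string directly avoids the disk round-trip entirely.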

    Read the article

  • C++ boost mpl vector

    - by Gokul
    I understand that the following code won't work, as i is a runtime parameter and not a compile-time parameter. But I want to know whether there is a way to achieve the same thing. I have a list of classes and I need to call a template function with each of these classes. void GucTable::refreshSessionParams() { typedef boost::mpl::vector< SessionXactDetails, SessionSchemaInfo > SessionParams; for( int i = 0; i < boost::mpl::size<SessionParams>::value; ++i ) boost::mpl::at<SessionParams, i>::type* sparam = g_getSessionParam< boost::mpl::at<SessionParams, i>::type >(); sparam->updateFromGucTable(this); } } Can someone suggest an easy and elegant way to do this? I need to iterate through the mpl::vector and use the type to call a global function and then use that parameter to do some run-time operations. Thanks in advance, Gokul. Working code typedef boost::mpl::vector< SessionXactDetails, SessionSchemaInfo > SessionParams; class GucSessionIterator { private: GucTable& m_table; public: GucSessionIterator(GucTable& table) :m_table(table) { } template< typename U > void operator()(const U& ) { g_getSessionParam<U>()->updateFromGucTable(m_table); } }; void GucTable::refreshSessionParams() { boost::mpl::for_each< SessionParams >( GucSessionIterator(*this) ); return; }

    Read the article

  • SQL Design: representing a default value with overrides?

    - by Mark Harrison
    I need a sparse table which contains a set of "override" values for another table. I also need to specify the default value for the items overridden. For example, if the default value is 17, then foo, bar, baz will have the values 17, 21, 17:

        table "things"          table "xvalue"
        name   stuff            name   xval
        ----   -----            ----   ----
        foo    ...              bar    21
        bar    ...
        baz    ...

    If I don't care about a FK from xvalue.name to things.name, I could simply put a "DEFAULT" name:

        table "xvalue"
        name      xval
        ----      ----
        DEFAULT   17
        bar       21

    But I like having a FK. I could have a separate default table, but it seems odd to have 2x the number of tables.

        table "xvalue_default"      table "xvalue"
        xval                        name   xval
        ----                        ----   ----
        17                          bar    21

    I could have a "defaults table":

        tablename   attributename   defaultvalue
        xvalue      xval            17

    but then I run into type issues on defaultvalue. My operations guys prefer as compact a representation as possible, so they can most easily see the "diff" or deviations from the default. What's the best way to represent this, including the default value? This will be for Oracle 10.2 if that makes a difference.
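
    One compact variant keeps the FK and avoids doubling the number of wide tables: store only the overrides plus a one-row default table, and resolve the effective value with COALESCE in a view. A sketch, assuming Oracle 10.2 syntax and invented column types:

        -- Overrides only: one row per thing that deviates from the default.
        CREATE TABLE xvalue (
            name  VARCHAR2(30) NOT NULL REFERENCES things(name),
            xval  NUMBER       NOT NULL,
            CONSTRAINT xvalue_pk PRIMARY KEY (name)
        );

        -- Single-row table holding the default keeps its type consistent with xvalue.xval.
        CREATE TABLE xvalue_default (
            xval NUMBER NOT NULL
        );

        -- Effective value per thing: the override if present, otherwise the default.
        CREATE OR REPLACE VIEW xvalue_effective AS
        SELECT t.name,
               COALESCE(x.xval, d.xval) AS xval
        FROM   things t
               CROSS JOIN xvalue_default d
               LEFT JOIN xvalue x ON x.name = t.name;

    The base xvalue table then stays exactly the "diff" the operations team wants to read, while queries against the view always see a complete picture.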

    Read the article

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code: class Program { static volatile bool flag1; static volatile bool flag2; static volatile int val; static void Main(string[] args) { for (int i = 0; i < 10000 * 10000; i++) { if (i % 500000 == 0) { Console.WriteLine("{0:#,0}",i); } flag1 = false; flag2 = false; val = 0; Parallel.Invoke(A1, A2); if (val == 0) throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2)); } } static void A1() { flag2 = true; if (flag1) val = 1; } static void A2() { flag1 = true; if (flag2) val = 2; } } } It fails! The main question is why... I suppose that the CPU reorders the flag1 = true; write and the if (flag2) statement, but the variables flag1 and flag2 are marked as volatile fields...
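
    The usual explanation is that .NET volatile only provides acquire/release semantics: flag2 = true followed by the read of flag1 is a store then a load of a different location, and that particular pair may still be reordered by the processor, so both threads can observe the other flag as false. A sketch of the standard fix, placing a full fence between the write and the read:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class Program
        {
            static volatile bool flag1, flag2;
            static volatile int val;

            static void A1()
            {
                flag2 = true;
                Thread.MemoryBarrier();   // full fence: the store above cannot move past the load below
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                Thread.MemoryBarrier();
                if (flag2) val = 2;
            }

            static void Main()
            {
                for (int i = 0; i < 100000; i++)
                {
                    flag1 = false; flag2 = false; val = 0;
                    Parallel.Invoke(A1, A2);
                    if (val == 0) throw new Exception("reordering observed at " + i);
                }
                Console.WriteLine("no reordering observed");
            }
        }

    Interlocked operations or a lock around the flag handshake achieve the same effect; volatile alone is not enough for this store-then-load pattern.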

    Read the article

  • How can I tell the size of my app during development?

    - by Newbyman
    My programming decisions are directly related to how much room I have left, or worse, perhaps how much I need to shave off in order to get under the 10 MB limit. I have read that Apple has quietly increased the 3G & EDGE download limit from 10 MB up to 20 MB in preparation for the iPad in April. Either way, my real question is: how can I get a rough estimate of how large my app will end up while I'm still in the development phase? Is the file size of my development folder roughly a 1-to-1 indication? Is the compressed file size of my development folder a better approximation? My .xcodeproj file is only a couple hundred kB, but the size of my folder is 11.8 MB. I have a .sqlite database, fewer than 20 small png images and a Settings.Bundle. The rest are unknown Xcode files related to build, build for iPhoneOS, simulator etc.... My source code is rather large, with around 1000 lines in most of the major controllers, all in all around 48 .h & .m files. But my Classes folder inside my development folder is less than 800 kB. Digging around inside my Build folder, there are lots of iPhone Simulator files and debugging files which I don't think will contribute to the final product. The Application file states that it is around 2.3 MB. However, this is such a large difference from the 11.8 MB that I have to wonder if this is just another piece of the equation. I have the app on my device; I'm in the testing phase. Therefore, I thought that I would try to see how large the working version was on the device by checking in iTunes; however, my development app is visible in the right-hand applications pane of the iPhone screen, but there is no information about the app, most importantly its size. I also checked in Organizer: I used the lower portion of the screen (Applications), found my application and selected the drop-down arrow, which gave me "Application Data" and a download arrow button to the right to save a file on my desktop, named with the unique AppleID. Inside the folder it had three folders (documents, library, tmp): the documents folder had a copy of my .sqlite database, the library a few more files but not anything obvious or of size, and the tmp folder was empty. All in all the entire folder was only 164 kB, which tells me that this is not the right place to find the size either. I understand that the size is considered to be the size of my binary plus all the additional files and images that I have added. Does anyone have an effective way of gauging how large the binary is, or of relating the development folder size to what the final App Store application size will end up being? I know that questions have been posted with similar aspects, but I could not find any answered post that really described what files to look at, or how to determine the size specifically. I know that this question looks like a book, but I just wanted to be specific in conveying exactly what I'm looking for and the attempts made thus far. *Note: all files are unzipped and still in the regular working Xcode layout of a single app with no brought-in builds or referenced projects. I'm sure that this is straightforward, I just don't know where to look.

    Read the article

  • Rolling back file moves, folder deletes and mysql queries

    - by Workoholic
    This has been bugging me all day and there is no end in sight. When the user of my php application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be mysql update and insert queries, file deletes and folder renaming and creations. I can track the status of all insert commands and undo them if an error is thrown. But how do I do this with the update statements? Is there a smart way (some design pattern?) to keep track of such changes both in the file structure and the database? My database tables are MyISAM. It would be easy to just convert everything to InnoDB, so that I can use transactions. That way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support. It would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.
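
    For the file and folder side, one common shape is the command pattern: each step records enough information to reverse itself, and a failed batch replays the completed steps' undo methods in reverse order. A rough sketch (interface and class names are invented):

        <?php
        interface ReversibleCommand
        {
            public function execute();
            public function undo();
        }

        class MoveFileCommand implements ReversibleCommand
        {
            private $from;
            private $to;

            public function __construct($from, $to)
            {
                $this->from = $from;
                $this->to   = $to;
            }

            public function execute()
            {
                if (!rename($this->from, $this->to)) {
                    throw new RuntimeException("move failed: {$this->from}");
                }
            }

            public function undo()
            {
                rename($this->to, $this->from);
            }
        }

        class Batch
        {
            private $done = array();

            public function run(array $commands)
            {
                foreach ($commands as $command) {
                    try {
                        $command->execute();
                        $this->done[] = $command;
                    } catch (Exception $e) {
                        $this->rollback();   // undo completed steps, newest first
                        throw $e;
                    }
                }
            }

            private function rollback()
            {
                foreach (array_reverse($this->done) as $command) {
                    $command->undo();
                }
            }
        }

    UPDATE statements fit the same mould if each command SELECTs the affected rows first and stashes the old values so undo() can write them back; deletes of files or folders are safer implemented as a move into a holding directory, since a real delete leaves nothing to restore from.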

    Read the article

  • Looking for all-in-one drm/installer/CD creation kit.

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, installation of our products - when a user gets them off our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is the fact that the automated system that works with the installer- and DRM-creation software we have is a disaster that needs to be put out of my misery. The list of products we currently produce, which a new system MUST be capable of producing: Retail CDs, with a certain level of obfuscation to make copying difficult. Downloadable installers that time out after a few hours of use of the product. After the time has expired, removing and reinstalling the product will leave you still blocked from use. Installers that will fail to work after a certain date. I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line issue is non-negotiable; this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit based licensing is not an option.

    Read the article

  • Proftpd log issue

    - by ggenov
    Hi, I'm developing an application where I read the current changes in certain folders on the server via FTP. The problem is that on files which begin with non-standart symbols as "~" the FTP server doesn't write full path name in the log Here is a part of my log: myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [STOR] [/home/myuser/public_html/galleries/~ajax-loader.gif] myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [MFMT] [-] myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [MLST] [-] myuser [14/Jun/2010:23:50:11 +0300] ::ffff:217.145.84.66 [MLST] [-] myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [MLST] [-] myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [PASV] [-] myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [STOR] [/home/myuser/public_html/galleries/~~ajax-loader.gif                                                                                            ] myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [MFMT] [-] myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [MLST] [-] myuser [14/Jun/2010:23:50:12 +0300] ::ffff:217.145.84.66 [MLST] [-] myuser [14/Jun/2010:23:50:22 +0300] ::ffff:217.145.84.66 [DELE] [] As you can see, all operations work very well except [DELE] command. My proftpd.conf file and its log's settings: LogFormat               default "%U %t %a [%m] [%f]" ExtendedLog             /var/log/proftpd ALL default #SystemLog               /var/log/proftpd ALL default #TransferLog             /var/log/proftpd ALL default Does anybody have a solution for this issue ?

    Read the article

  • Representing complex scheduled recurrence in a database

    - by David Pfeffer
    I have the interesting problem of representing complex schedule data in a database. As a guideline, I need to be able to represent the entirety of what the iCalendar -- ics -- format can represent, but in a database. I'm not actually implementing anything relating to ics, but it gives a good scope of the type of rules I need to be able to model. I need to allow representation of a single event or a recurring event based on multiple times per day, days of the week, week of a month, month, year, or some combination of those. For example, the third Thursday in November annually, or the 25th of December annually, or every two weeks starting November 2 and continuing until September 8 the following year. I don't care about insertion efficiency but query efficiency is critical. The operation I will be doing most often is providing either a single date/time or a date/time range, and trying to determine if the defined schedule matches any part of the date/time range. Other operations can be slower. For example, given January 15, 2010 at 10:00 AM through January 15, 2010 at 11:00 AM, find all schedules that match at least part of that time. (i.e. a schedule that covers 10:30 - 11:00 still matches.) Any suggestions? I looked at http://stackoverflow.com/questions/1016170/how-would-one-represent-scheduled-events-in-an-rdbms but it doesn't cover the scope of the type of recurrence rules I'd like to model.
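
    One widely used compromise is to store the rule itself (RRULE-style fields) for editing, but also to materialise the concrete occurrences into an indexed table out to some horizon (say, a couple of years), so the hot query, "does anything overlap this range?", becomes a plain range predicate. A rough sketch with invented table and column names:

        -- The editable rule, roughly mirroring the iCalendar RRULE parts.
        CREATE TABLE schedule (
            schedule_id   INT PRIMARY KEY,
            dtstart       DATETIME NOT NULL,
            duration_min  INT      NOT NULL,
            freq          VARCHAR(10),   -- DAILY / WEEKLY / MONTHLY / YEARLY, NULL for one-off
            interval_n    INT DEFAULT 1,
            by_day        VARCHAR(50),   -- e.g. 'MO,WE,FR' or '3TH' (third Thursday)
            by_month_day  VARCHAR(50),
            by_month      VARCHAR(50),
            until_date    DATETIME
        );

        -- Expanded occurrences, regenerated whenever the rule changes.
        CREATE TABLE schedule_occurrence (
            schedule_id INT      NOT NULL REFERENCES schedule(schedule_id),
            starts_at   DATETIME NOT NULL,
            ends_at     DATETIME NOT NULL
        );
        CREATE INDEX ix_occurrence_range ON schedule_occurrence (starts_at, ends_at);

        -- "Find all schedules matching any part of 10:00-11:00 on 2010-01-15":
        SELECT DISTINCT schedule_id
        FROM   schedule_occurrence
        WHERE  starts_at < '2010-01-15 11:00'
          AND  ends_at   > '2010-01-15 10:00';

    Insertion (re-expanding a rule) is the slow side of this trade, which matches the stated priorities; unbounded rules just get re-expanded further out by a background job as the horizon approaches.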

    Read the article

  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give it a try against a codeplex project to which I contribute. That way, I can test functionality against a larger number of changesets than I have locally. While it works fine in my environment, heading out over the wire to codeplex has left me stumped. My application queries the history, but then, when trying to iterate through the history (which is when it lazy-loads the IEnumerable), my application hangs. Looking at Intellitrace, I see a couple of "first chance" exceptions that the "item doesn't exist at the specified version"-- which is patently not true, as I'm trying to get history for "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors being returned to me after forcing debugging to pause. Other operations (like GetItems() ) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code: IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None, null, null, null, 5, true, false); List<ChangesetItem> returnList = new List<ChangesetItem>(); foreach (Changeset cs in items) //hangs here on first iteraiton { ChangesetItem newItem = new ChangesetItem() { ChangesetId = cs.ChangesetId, //ChangesetNote = cs.CheckinNote.Values[0].Value, Comment = cs.Comment, Committer = cs.Committer, CreationDate = cs.CreationDate }; returnList.Add(newItem); }
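
    One thing worth ruling out, offered as a guess rather than a confirmed fix: querying the collection root "$/" with RecursionType.None asks for the history of the root folder item itself, and the "item doesn't exist at the specified version" first-chance exceptions suggest the hosted server is struggling to resolve exactly that. Scoping the call to the team project path and recursing may behave better (the project name below is a placeholder):

        // Same overload the question uses, with the path and recursion changed.
        IEnumerable items = vcs.QueryHistory(
            "$/MyCodeplexProject",   // placeholder for the actual team project path
            VersionSpec.Latest,
            0,                       // deletionId
            RecursionType.Full,
            null,                    // user: all committers
            null, null,              // versionFrom / versionTo
            5,                       // maxCount
            false,                   // includeChanges: not needed for this listing
            false);                  // slotMode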

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, a merging interface for each object, etc. My question is what source control paradigm(s) and specific solution(s) to support and why. The way I (perhaps naively) see the source control world is three general paradigms: Single-repository, locked access (MS SourceSafe) Single-repository, concurrent access (CVS/SVN) Distributed (Mercurial, Git) I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the use paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse. Certain small regions of relatively high density. Relatively few isolated nonempty cells. The amount of the grid in use is too large to implement naively but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that) This needs to be easy to persist. Here are the operations I may want to perform (reasonably efficiently) on this grid: Ask for some small rectangular region of cells and all their contents (a player's current neighborhood) Set individual cells or blit small regions (the player is making a move) Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview) Find some regions with approximately a given density (player spawning location) Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching) Approximate convex hull for a region Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process). Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
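
    A common way to make a sparse, effectively unbounded grid fit a relational store is to bucket cells into fixed-size chunks (say 64x64) keyed by chunk coordinates: a neighbourhood query then touches at most four rows, and the outline and density questions can be answered from per-chunk summary columns instead of raw cells. A sketch of that idea with invented names:

        -- One row per non-empty chunk; the cell payload is whatever serialization
        -- the app prefers (JSON, packed bytes, ...).
        CREATE TABLE grid_chunk (
            chunk_x        INT NOT NULL,
            chunk_y        INT NOT NULL,
            occupied_cells INT NOT NULL,   -- summary for density / spawn-point queries
            outline        BLOB,           -- precomputed low-res silhouette for map previews
            cells          BLOB NOT NULL,
            PRIMARY KEY (chunk_x, chunk_y)
        );

        -- Player neighbourhood: convert the requested cell rectangle to chunk
        -- coordinates in the app, then fetch the covering chunks.
        SELECT chunk_x, chunk_y, cells
        FROM   grid_chunk
        WHERE  chunk_x BETWEEN :cx0 AND :cx1
          AND  chunk_y BETWEEN :cy0 AND :cy1;

        -- Rough spawn-location search: any chunk in the desired density band.
        -- (ORDER BY RANDOM() is SQLite/PostgreSQL; MySQL spells it RAND().)
        SELECT chunk_x, chunk_y
        FROM   grid_chunk
        WHERE  occupied_cells BETWEEN :lo AND :hi
        ORDER  BY RANDOM()
        LIMIT  1;

    Path-finding and convex hulls are usually easier to run in application code over the handful of chunks covering a region than inside SQL itself.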

    Read the article

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences. Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter,MipFilter set similarly): Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear); and the GDI+ path is using: g.InterpolationMode = InterpolationMode.Bilinear; I thought that "Bilinear Interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking") Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!

    Read the article

  • Generated LinqtoSql Sql 5x slower than SAME EXACT hand-written sql

    - by JasonM
    I have a sql statement which is hardcoded in an existing VB6 app. I'm upgrading a new version in C# and using Linq To Sql. I was able to get LinqToSql to generate the same sql (before I start refactoring), but for some reason the Sql generated by LinqToSql is 5x slower than the original sql. This is running the generated Sql Directly in LinqPad. The only real difference my meager sql eyes can spot is the WITH (NOLOCK), which if I add into the LinqToSql generated sql, makes no difference. Can someone point out what I'm doing wrong here? Thanks! Existing Hard Coded Sql (5.0 Seconds) SELECT DISTINCT CH.ClaimNum, CH.AcnProvID, CH.AcnPatID, CH.TinNum, CH.Diag1, CH.GroupNum, CH.AllowedTotal FROM Claims.dbo.T_ClaimsHeader AS CH WITH (NOLOCK) WHERE CH.ContractID IN ('123A','123B','123C','123D','123E','123F','123G','123H') AND ( ( (CH.Transmited Is Null or CH.Transmited = '') AND CH.DateTransmit Is Null AND CH.EobDate Is Null AND CH.ProcessFlag IN ('Y','E') AND CH.DataSource NOT IN ('A','EC','EU') AND CH.AllowedTotal > 0 ) ) ORDER BY CH.AcnPatID, CH.ClaimNum Generated Sql from LinqToSql (27.6 Seconds) -- Region Parameters DECLARE @p0 NVarChar(4) SET @p0 = '123A' DECLARE @p1 NVarChar(4) SET @p1 = '123B' DECLARE @p2 NVarChar(4) SET @p2 = '123C' DECLARE @p3 NVarChar(4) SET @p3 = '123D' DECLARE @p4 NVarChar(4) SET @p4 = '123E' DECLARE @p5 NVarChar(4) SET @p5 = '123F' DECLARE @p6 NVarChar(4) SET @p6 = '123G' DECLARE @p7 NVarChar(4) SET @p7 = '123H' DECLARE @p8 VarChar(1) SET @p8 = '' DECLARE @p9 NVarChar(1) SET @p9 = 'Y' DECLARE @p10 NVarChar(1) SET @p10 = 'E' DECLARE @p11 NVarChar(1) SET @p11 = 'A' DECLARE @p12 NVarChar(2) SET @p12 = 'EC' DECLARE @p13 NVarChar(2) SET @p13 = 'EU' DECLARE @p14 Decimal(5,4) SET @p14 = 0 -- EndRegion SELECT DISTINCT [t0].[ClaimNum], [t0].[acnprovid] AS [AcnProvID], [t0].[acnpatid] AS [AcnPatID], [t0].[tinnum] AS [TinNum], [t0].[diag1] AS [Diag1], [t0].[GroupNum], [t0].[allowedtotal] AS [AllowedTotal] FROM [Claims].[dbo].[T_ClaimsHeader] AS [t0] WHERE ([t0].[contractid] IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7)) AND (([t0].[Transmited] IS NULL) OR ([t0].[Transmited] = @p8)) AND ([t0].[DATETRANSMIT] IS NULL) AND ([t0].[EOBDATE] IS NULL) AND ([t0].[PROCESSFLAG] IN (@p9, @p10)) AND (NOT ([t0].[DataSource] IN (@p11, @p12, @p13))) AND ([t0].[allowedtotal] > @p14) ORDER BY [t0].[acnpatid], [t0].[ClaimNum] New LinqToSql Code (30+ seconds... Times out ) var contractIds = T_ContractDatas.Where(x => x.EdiSubmissionGroupID == "123-01").Select(x => x.CONTRACTID).ToList(); var processFlags = new List<string> {"Y","E"}; var dataSource = new List<string> {"A","EC","EU"}; var results = (from claims in T_ClaimsHeaders where contractIds.Contains(claims.contractid) && (claims.Transmited == null || claims.Transmited == string.Empty ) && claims.DATETRANSMIT == null && claims.EOBDATE == null && processFlags.Contains(claims.PROCESSFLAG) && !dataSource.Contains(claims.DataSource) && claims.allowedtotal > 0 select new { ClaimNum = claims.ClaimNum, AcnProvID = claims.acnprovid, AcnPatID = claims.acnpatid, TinNum = claims.tinnum, Diag1 = claims.diag1, GroupNum = claims.GroupNum, AllowedTotal = claims.allowedtotal }).OrderBy(x => x.ClaimNum).OrderBy(x => x.AcnPatID).Distinct(); I'm using the list of constants above to make LinqToSql Generate IN ('xxx','xxx',etc) Otherwise it uses subqueries which are just as slow...
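
    One classic cause of exactly this kind of gap is worth checking before blaming the query shape: LINQ to SQL declares string parameters as NVarChar, and if ContractID (and the other string columns) are varchar, SQL Server converts the column side up to nvarchar, which turns index seeks into scans. The hand-written query embeds plain varchar literals and avoids that. A diagnostic sketch, running the generated text with the parameter types swapped (not a change to the LINQ itself):

        -- If this runs about as fast as the hand-written query, the nvarchar
        -- parameters are the culprit.
        DECLARE @p0 VarChar(4); SET @p0 = '123A';
        DECLARE @p1 VarChar(4); SET @p1 = '123B';
        -- ... declare the remaining parameters the same way ...

        SELECT DISTINCT CH.ClaimNum, CH.AcnProvID, CH.AcnPatID, CH.TinNum,
               CH.Diag1, CH.GroupNum, CH.AllowedTotal
        FROM   Claims.dbo.T_ClaimsHeader AS CH
        WHERE  CH.ContractID IN (@p0, @p1 /* , ... */)
          AND  (CH.Transmited IS NULL OR CH.Transmited = '')
          AND  CH.DateTransmit IS NULL
          AND  CH.EobDate IS NULL
          AND  CH.ProcessFlag IN ('Y', 'E')
          AND  CH.DataSource NOT IN ('A', 'EC', 'EU')
          AND  CH.AllowedTotal > 0
        ORDER  BY CH.AcnPatID, CH.ClaimNum;

    If that turns out to be the cause, the usual remedy is to set the column's DbType to VarChar in the DBML mapping so LINQ to SQL emits matching parameter types, rather than rewriting the query.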

    Read the article

  • How do I use accepts_nested_attributes_for?

    - by Angela
    Editing my question for conciseness and to update what I've done: How do I model having multiple Addresses for a Company and assign a single Address to a Contact, and be able to assign them when creating or editing a Contact? I want to use nested attributes to be able to add an address at the time of creating a new contact. That address exists as its own model because I may want the option to drop-down from existing addresses rather than entering from scratch. I can't seem to get it to work. I get a undefined method `build' for nil:NilClass error Here is my model for Contacts: class Contact < ActiveRecord::Base attr_accessible :first_name, :last_name, :title, :phone, :fax, :email, :company, :date_entered, :campaign_id, :company_name, :address_id, :address_attributes belongs_to :company belongs_to :address accepts_nested_attributes_for :address end Here is my model for Address: class Address < ActiveRecord::Base attr_accessible :street1, :street2, :city, :state, :zip has_many :contacts end I would like, when creating an new contact, access all the Addresses that belong to the other Contacts that belong to the Company. So here is how I represent Company: class Company < ActiveRecord::Base attr_accessible :name, :phone, :addresses has_many :contacts has_many :addresses, :through => :contacts end Here is how I am trying to create a field in the View for _form for Contact so that, when someone creates a new Contact, they pass the address to the Address model and associate that address to the Contact: <% f.fields_for :address, @contact.address do |builder| %> <p> <%= builder.label :street1, "Street 1" %> </br> <%= builder.text_field :street1 %> <p> <% end %> When I try to Edit, the field for Street 1 is blank. And I don't know how to display the value from show.html.erb. At the bottom is my error console -- can't seem to create values in the address table: My Contacts controller is as follows: def new @contact = Contact.new @contact.address.build # Iundefined method `build' for nil:NilClass @contact.date_entered = Date.today @campaigns = Campaign.find(:all, :order => "name") if params[:campaign_id].blank? else @campaign = Campaign.find(params[:campaign_id]) @contact.campaign_id = @campaign.id end if params[:company_id].blank? else @company = Company.find(params[:company_id]) @contact.company_name = @company.name end end def create @contact = Contact.new(params[:contact]) if @contact.save flash[:notice] = "Successfully created contact." redirect_to @contact else render :action => 'new' end end def edit @contact = Contact.find(params[:id]) @campaigns = Campaign.find(:all, :order => "name") end Here is a snippet of my error console: I am POSTING the attribute, but it is not CREATING in the Address table.... 
Processing ContactsController#create (for 127.0.0.1 at 2010-05-12 21:16:17) [POST] Parameters: {"commit"="Submit", "authenticity_token"="d8/gx0zy0Vgg6ghfcbAYL0YtGjYIUC2b1aG+dDKjuSs=", "contact"={"company_name"="Allyforce", "title"="", "campaign_id"="2", "address_attributes"={"street1"="abc"}, "fax"="", "phone"="", "last_name"="", "date_entered"="2010-05-12", "email"="", "first_name"="abc"}} Company Load (0.0ms)[0m [0mSELECT * FROM "companies" WHERE ("companies"."name" = 'Allyforce') LIMIT 1[0m Address Create (16.0ms)[0m [0;1mINSERT INTO "addresses" ("city", "zip", "created_at", "street1", "updated_at", "street2", "state") VALUES(NULL, NULL, '2010-05-13 04:16:18', NULL, '2010-05-13 04:16:18', NULL, NULL)[0m Contact Create (0.0ms)[0m [0mINSERT INTO "contacts" ("company", "created_at", "title", "updated_at", "campaign_id", "address_id", "last_name", "phone", "fax", "company_id", "date_entered", "first_name", "email") VALUES(NULL, '2010-05-13 04:16:18', '', '2010-05-13 04:16:18', 2, 2, '', '', '', 5, '2010-05-12', 'abc', '')[0m
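
    The immediate NilClass error comes from the controller rather than the models: for a belongs_to (or has_one) association the builder lives on the parent object itself, so it is @contact.build_address, not @contact.address.build. The edit action also needs an address built only when none exists yet, so the saved street1 shows up in the form. A sketch of both actions under that assumption:

        def new
          @contact = Contact.new
          @contact.build_address                 # belongs_to: build_<name>, not <name>.build
          @contact.date_entered = Date.today
          @campaigns = Campaign.find(:all, :order => "name")
        end

        def edit
          @contact = Contact.find(params[:id])
          @contact.build_address if @contact.address.nil?   # keep the saved address for the form
          @campaigns = Campaign.find(:all, :order => "name")
        end

    With accepts_nested_attributes_for :address in place, the fields_for :address block then posts its values under contact[address_attributes], which matches the attr_accessible list shown above, and displaying the value in show.html.erb is simply @contact.address.street1.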

    Read the article

  • Accelerometer gravity components

    - by Dvd
    Hi, I know this question is definitely solved somewhere many times already, please enlighten me if you know of their existence, thanks. Quick rundown: I want to compute from a 3 axis accelerometer the gravity component on each of these 3 axes. I have used 2 axes free body diagrams to work out the accelerometer's gravity component in the world X-Z, Y-Z and X-Y axes. But the solution seems slightly off, it's acceptable for extreme cases when only 1 accelerometer axis is exposed to gravity, but for a pitch and roll of both 45 degrees, the combined total magnitude is greater than gravity (obtained by Xa^2+Ya^2+Za^2=g^2; Xa, Ya and Za are accelerometer readings in its X, Y and Z axis). More detail: The device is a Nexus One, and have a magnetic field sensor for azimuth, pitch and roll in addition to the 3-axis accelerometer. In the world's axis (with Z in the same direction as gravity, and either X or Y points to the north pole, don't think this matters much?), I assumed my device has a pitch (P) on the Y-Z axis, and a roll (R) on the X-Z axis. With that I used simple trig to get: Sin(R)=Ax/Gxz Cos(R)=Az/Gxz Tan(R)=Ax/Az There is another set for pitch, P. Now I defined gravity to have 3 components in the world's axis, a Gxz that is measurable only in the X-Z axis, a Gyz for Y-Z, and a Gxy for X-Y axis. Gxz^2+Gyz^2+Gxy^2=2*G^2 the 2G is because gravity is effectively included twice in this definition. Oh and the X-Y axis produce something more exotic... I'll explain if required later. From these equations I obtained a formula for Az, and removed the tan operations because I don't know how to handle tan90 calculations (it's infinity?). So my question is, anyone know whether I did this right/wrong or able to point me to the right direction? Thanks! Dvd
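
    For reference, decomposing gravity with a single rotation (instead of three independent planar free-body diagrams) keeps the magnitude consistent. Under one common convention, pitch P applied about the device X axis first and then roll R about the Y axis, the static accelerometer readings work out to (a sketch of the algebra; sign conventions differ between devices and APIs):

        A_x = -g \sin R \cos P
        A_y =  g \sin P
        A_z =  g \cos R \cos P

        A_x^2 + A_y^2 + A_z^2
          = g^2 (\sin^2 R \cos^2 P + \sin^2 P + \cos^2 R \cos^2 P)
          = g^2

    At pitch = roll = 45 degrees this gives roughly (-0.5g, 0.707g, 0.5g), not two full 0.707g components, which is where the "greater than gravity" total comes from: the separate X-Z, Y-Z and X-Y decompositions double-count the shared axes, hence the 2*G^2 term in the question.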

    Read the article

  • Html.ListBox() and MultiselectList

    - by Ivan90
    Hi guys, I have a little problem with an Html.ListBox! I am developing a personal blog in ASP.NET MVC 1.0 and I created an admin panel where I can add and edit a post! During these two operations, I can also add tags! I thought of using an Html.ListBox() helper to list all tags, so I can select multiple tags to add to a post! The problem isn't in add mode, but in edit mode, where I have to pre-select the post's tags! I read that I have to use a MultiSelectList and pass to its constructor the tags list and the list of pre-selected values. But I don't know how to use this class! I'll post some code. This is my model method that gets all tags as a select list: public IEnumerable<SelectListItem> GetTagsListBox() { return from t in db.Tags orderby t.IDTag descending select new SelectListItem { Text = t.TagName, Value = t.IDTag.ToString(), }; } So in Edit (Get and Post) and Add (Get and Post) I use ViewData to pass this list to Html.ListBox(): ViewData["Tags"] = tagdb.GetTagsListBox(); And in my view: <%=Html.ListBox("Tags",ViewData["Tags"] as SelectList) %> With this code it's OK in add mode! But in edit mode I need to pre-select those values! So now, of course, I have to create a method that gets all tags by post ID, and then what must I pass in ViewData? Any suggestions?
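
    For the edit case, the usual pattern with the stock helpers is to build a MultiSelectList whose fourth constructor argument is the collection of tag IDs already attached to the post, and hand that to Html.ListBox. A sketch along those lines (the repository and method names are invented):

        // Edit (GET): list every tag, pre-selecting the ones already on the post.
        IEnumerable<Tag> allTags = tagRepository.GetTags();
        IEnumerable<int> selectedIds = postRepository.GetTagIdsByPostId(id);

        ViewData["Tags"] = new MultiSelectList(allTags, "IDTag", "TagName", selectedIds);

        // View:
        // <%= Html.ListBox("Tags", ViewData["Tags"] as MultiSelectList) %>

    Note that the existing view's cast "as SelectList" would come back null here, because SelectList derives from MultiSelectList rather than the other way around; casting to MultiSelectList (or IEnumerable<SelectListItem>) keeps the pre-selection intact.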

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server would then serialize the graph, send it to the client (presumably using WCF), the server then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server. However I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parametrised-constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object-graph (immutable data seems right-out, it also requires parameter-less constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me. I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object-graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets. So my questions are: Has anyone faced a similar problem before and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends only bits that have changed (thus reducing concurrency-related risks?)? If sending the object-graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialise my object graph?
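
    If the WCF route is kept, one detail that makes "send the whole graph" workable is that the DataContractSerializer can preserve shared references and cycles when a contract is marked IsReference = true; it does not remove every design constraint, but parent/child back-references in the graph stop being a problem. A small illustrative sketch (the DNS-flavoured type names are invented for the example):

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]   // preserve object identity and cycles in the graph
        public class DnsZone
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public List<DnsRecord> Records { get; set; }
        }

        [DataContract(IsReference = true)]
        public class DnsRecord
        {
            [DataMember] public string HostName { get; set; }
            [DataMember] public DnsZone Zone { get; set; }   // back-reference is safe with IsReference
        }

    Whether to ship the whole graph or finer-grained pieces is a separate call; for two coarse operations over a large graph the full graph is simpler, while partial updates shrink the window in which two clients can clobber each other's changes.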

    Read the article

  • Should java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exception within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at statements, but the comments clearly show which statements are responsible. The second one is much longer and complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block. What is the best practice for handling exceptions in a funcion where multiple exceptions could be thrown in its definition? This one contains all the processing code within the try block: void processFile(File f) { try { // construction of FileReader can throw FileNotFoundException BufferedReader in = new BufferedReader(new FileReader(f)); // call of readLine can throw IOException String line; while ((line = in.readLine()) != null) { process(line); } } catch (FileNotFoundException ex) { handle(ex); } catch (IOException ex) { handle(ex); } } This one contains only the methods that throw exceptions within try blocks: void processFile(File f) { FileReader reader; try { reader = new FileReader(f); } catch (FileNotFoundException ex) { handle(ex); return; } BufferedReader in = new BufferedReader(reader); String line; while (true) { try { line = in.readLine(); } catch (IOException ex) { handle(ex); break; } if (line == null) { break; } process(line); } }

    Read the article

  • Another rsAccessDenied problem with SSRS

    - by Rich.Carpenter
    I've read through a lot of posts regarding the problem, but none of the proposed solutions have worked for me. I continue to get an error stating, "The permissions granted to user '\Rich' are insufficient for performing this operation. (rsAccessDenied)." If I am logged in as the local administrator account, entering the Reporting Services URL in IE doesn't give me that error, but it takes me to a blank page. I haven't been able to get to an SSRS home page at all. Order of operations: I installed and patched Windows 7 Ultimate 64-bit. I installed SQL Server Express 2008 with Advanced Services using the MS web installer. I downloaded and installed SP1 for SQL Server Express 2008. I've tried running IE as administrator, adding the local machine to trusted sites, and just about every other suggestion I've found. I even ran the entire installation logged in as the local administrator. Nothing seems to work. Could someone please tell me, considering the above installation process, what I should do next to make this work?

    Read the article

  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: sun JDK 1.6.0_16 vm settings: -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024 -Xmx1024M -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4(6 also checked) -XX:MaxPermSize=128M OS: windows server 2003 processor: 4 cores of INTEL XEON 5130, 2000 Hz my application description: high intensity of concurrent(java 5 concurrency used) operations completed each time by commit to oracle. it's about 20-30 threads run non stop, doing tasks. application runs in JBOSS web container. My GC starts work normally, I see a lot of small GCs and all that time CPU shows good load, like all 4 cores loaded to 40-50%, CPU graph is stable. Then , after 1 min of good work, CPU starts drop to 0% on 2 cores from 4, it's graph becomes unstable, goes up and down("teeth"). I see, that my threads work slower(I have monitoring), I see that GC starts produce a lot of FULL GC during that time and next 4-5 minutes this situation remains as is, then for short period of time, like 1 minute, it gets back to normal situation, but shortly after that all bad thing repeats. Question: Why I have so frequent full GC??? How to prevent that? I played with SurvivorRatio - does not help. I noticed, that application behaves normally until first FULL GC occurs, while I have enough memory. Then it runs badly. my GC LOG: starts good then long period of FULL GCs(many of them) 1027.861: [GC 942200K-623526K(991232K), 0.0887588 secs] 1029.333: [GC 803279K(991232K), 0.0927470 secs] 1030.551: [GC 967485K-625549K(991232K), 0.0823024 secs] 1030.634: [GC 625957K(991232K), 0.0763656 secs] 1033.126: [GC 969613K-632963K(991232K), 0.0850611 secs] 1033.281: [GC 649899K(991232K), 0.0378358 secs] 1035.910: [GC 813948K(991232K), 0.3540375 secs] 1037.994: [GC 967729K-637198K(991232K), 0.0826042 secs] 1038.435: [GC 710309K(991232K), 0.1370703 secs] 1039.665: [GC 980494K-972462K(991232K), 0.6398589 secs] 1040.306: [Full GC 972462K-619643K(991232K), 3.7780597 secs] 1044.093: [GC 620103K(991232K), 0.0695221 secs] 1047.870: [Full GC 991231K-626514K(991232K), 3.8732457 secs] 1053.739: [GC 942140K(991232K), 0.5410483 secs] 1056.343: [Full GC 991232K-634157K(991232K), 3.9071443 secs] 1061.257: [GC 786274K(991232K), 0.3106603 secs] 1065.229: [Full GC 991232K-641617K(991232K), 3.9565638 secs] 1071.192: [GC 945999K(991232K), 0.5401515 secs] 1073.793: [Full GC 991231K-648045K(991232K), 3.9627814 secs] 1079.754: [GC 936641K(991232K), 0.5321197 secs]
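
    A hedged starting point for diagnosing this pattern: with -XX:NewSize=448m out of a 1 GB heap the old generation is only somewhere around 580 MB, while the post-Full-GC live set in the log is already roughly 600-650 MB, so CMS is permanently short of room and keeps falling back to stop-the-world collections. Turning on GC logging and pinning the CMS trigger usually confirms it; the values below are illustrative, not a tuning recommendation for this exact heap:

        -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
        -XX:+UseConcMarkSweepGC
        -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly

    If the log then shows "concurrent mode failure" entries, either the live set has to shrink, the new generation has to give space back to the old generation (a smaller -XX:NewSize), or -Xmx has to grow.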

    Read the article

  • Neo4j increasing latency as SKIP increases on Cypher query + REST API

    - by voldomazta
    My setup: Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) Neo4j 2.0.0-M06 Enterprise First I made sure I warmed up the cache by executing the following: START n=node(*) RETURN COUNT(n); START r=relationship(*) RETURN count(r); The size of the table is 63,677 nodes and 7,169,995 relationships Now I have the following query: START u1=node:node_auto_index('uid:39') MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user) WHERE u2.uid <> 39 WITH u2.uid AS uid, (CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS have RETURN uid, SUM(have) AS total ORDER BY total DESC SKIP 0 LIMIT 25 This UID has about 40k+ results that I want to be able to put a pagination to. The initial skip was around 773ms. I tried page 2 (skip 25) and the latency was around the same even up to page 500 it only rose up to 900ms so I didn't really bother. Now I tried some fast forward paging and jumped by thousands so I did 1000, then 2000, then 3000. I was hoping the ORDER BY arrangement will already have been cached by Neo4j and using SKIP will just move to that index in the result and wont have to iterate through each one again. But for each thousand skip I made the latency increased by alot. It's not just cache warming because for one I already warmed up the cache and two, I tried the same skip a couple of times for each skip and it yielded the same results: SKIP 0: 773ms SKIP 1000: 1369ms SKIP 2000: 2491ms SKIP 3000: 3899ms SKIP 4000: 5686ms SKIP 5000: 7424ms Now who the hell would want to view 5000 pages of results? 40k even?! :) Good point! I will probably put a cap on the maximum results a user can view but I was just curious about this phenomenon. Will somebody please explain why Neo4j seems to be re-iterating through stuff which appears to be already known to it? 
Here is my profiling for the 0 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(0)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14 of type Any),false)"], limit="Add(Literal(0),Literal(25))", _rows=25, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) And for the 5000 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(5000)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872 of type Any),false)"], limit="Add(Literal(5000),Literal(25))", _rows=5025, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) The only difference is the LIMIT clause on the Top function. I hope we can make this work as intended, I really don't want to delve into doing an embedded Neo4j + my own Jetty REST API for the web app.

    Read the article

  • Python C API from C++ app - know when to lock

    - by Alex
    Hi Everyone, I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout) at once. The problem I have ran into is that my class is called from different threads: sometimes main thread, sometimes different others. Obviously I tried to apply the approach for Python calls in multi-threaded native applications. Basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock or just global locks. According to the documentation here when a thread is already locked a deadlock ensues. When my class is called from the main thread or other one that blocks Python execution I have a deadlock. Python Cfunc1() - C++ func that creates threads internally which lead to calls in "my class", It stuck on PyEval_AcquireLock, obviously the Python is already locked, i.e. waiting for C++ Cfunc1 call to complete... It completes fine if I omit those locks. Also it completes fine when Python interpreter is ready for the next user command, i.e. when thread is calling funcs in the background - not inside of a native call I am looking for a workaround. I need to distinguish whether or not the global lock is allowed, i.e. Python is not locked and ready to receive the next command... I tried PyGIL_Ensure, unfortunately I see hang. Any known API or solution for this ? (Python 2.4)
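
    For calls coming in from arbitrary native threads, the PyGILState API (present since Python 2.3, so usable on 2.4) is normally the whole answer: PyGILState_Ensure acquires the GIL if the calling thread does not already hold it and simply nests if it does, so the caller never needs to know whether Python is "already locked". The prerequisites are a one-time PyEval_InitThreads() on the main thread, and that the main thread actually releases the GIL around long native calls; the described hang (a worker blocking while the main thread sits inside Cfunc1) is the classic symptom of the main thread still holding the GIL. A minimal sketch:

        #include <Python.h>

        // Safe to call from any native thread, including ones Python never created.
        void call_into_python(PyObject* callable)
        {
            PyGILState_STATE state = PyGILState_Ensure();   // nests safely if this thread already holds the GIL

            PyObject* result = PyObject_CallObject(callable, NULL);
            if (result == NULL)
                PyErr_Print();          // or translate into a C++ exception
            else
                Py_DECREF(result);

            PyGILState_Release(state);
        }

        // Inside the C implementation of the long-running native call (Cfunc1),
        // release the GIL so worker threads can get in:
        //     Py_BEGIN_ALLOW_THREADS
        //     do_long_native_work();
        //     Py_END_ALLOW_THREADS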

    Read the article

  • Working with extra fields in an Inline form - save_model, save_formset, can't make sense of the difference

    - by magicrebirth
    Suppose I am in the usual situation where there're extra fields in the many2many relationship: class Person(models.Model): name = models.CharField(max_length=128) class Group(models.Model): name = models.CharField(max_length=128) members = models.ManyToManyField(Person, through='Membership') class Membership(models.Model): person = models.ForeignKey(Person) group = models.ForeignKey(Group) date_joined = models.DateField() invite_reason = models.CharField(max_length=64) # other models which are unrelated to the ones above.. class Trip(models.Model): placeVisited = models.ForeignKey(Place) visitor = models.ForeignKey(Person) pleasuretrip = models.Boolean() class Place(models.Model): name = models.CharField(max_length=128) I want to add some extra fields in the Membership form that gets displayed through the Inline. These fields basically are a shortcut to the instantiation of another model (Trip). Trip can have its own admin views, but these shortcuts are needed because when my project partners are entering 'Membership' data in the system they happen to have also the 'Trip' information handy (and also because some of the info in Membership can just be copied over to Trip etc. etc.). So all I want to have is two extra fields in the Membership Inline - placeVisited and pleasuretrip - which together with the Person instance will let me instantiate the Trip model in the background... I found out I can easily add extra fields to the inline view by defining my own form. But once the data have been entered, how and when to reference to them in order to perform the save operations I need to do? class MyForm(forms.ModelForm): place = forms.ModelChoiceField(required=False, queryset=Place.objects.all(), label="place",) pleasuretrip = forms.BooleanField(required=False, label="...") class MembershipInline(admin.TabularInline): model = Membership form = MyForm def save_model(self, request, obj, form, change): place = form.place pleasuretrip = form.pleasuretrip person = form.person .... # now I can create Trip instances with those data .... obj.save() class GroupAdmin(admin.ModelAdmin): model = Group .... inlines = (MembershipInline,) This doesn't seem to work... I'm also a bit puzzled by the save_formset method... maybe is that the one I should be using? Many thanks in advance for the help!!!!
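
    For inline rows it is the parent ModelAdmin's save_formset that runs, not save_model on the InlineModelAdmin, and inside it each inline form still exposes its cleaned_data, which is where the extra place/pleasuretrip fields end up. A hedged sketch of wiring the Trip creation in there (field names follow the models above):

        class GroupAdmin(admin.ModelAdmin):
            inlines = (MembershipInline,)

            def save_formset(self, request, form, formset, change):
                instances = formset.save()   # saves the Membership rows as usual
                for membership_form in formset.forms:
                    data = membership_form.cleaned_data
                    if not data or data.get('DELETE'):
                        continue             # skip blank extra rows and deletions
                    place = data.get('place')
                    person = data.get('person')
                    if place and person:
                        Trip.objects.create(placeVisited=place,
                                            visitor=person,
                                            pleasuretrip=bool(data.get('pleasuretrip')))

    One caveat: the Trip model as written would need pleasuretrip declared as models.BooleanField() rather than models.Boolean() for this to save.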

    Read the article
