Search Results

Search found 1703 results on 69 pages for 'grails constraints'.

  • When to drop an IT job

    - by Nippysaurus
    In my career I have had two programming jobs, both in the field I am most familiar with (C#/MSSQL), and I quit both for the same reason: unmanageable code and bad (loose) company structure. The two jobs had something else in common: small companies (in one I was the only developer). Currently I am in the following position:
    - I am given written instructions which are almost impossible to follow (somewhat of a fool's errand).
    - We are given short time constraints but are seldom asked how long the work will take; when we are asked, the estimate is always too long and needs to be shorter (and when the work ends up taking longer than they need it to, it's always our fault).
    - There is no time for proper documenting, but we get blamed for not documenting (see the previous point).
    - Management is constantly screwing me around, saying I'm underperforming on a given task (which is not true) and switching me to a task which is much more confusing.
    So I must ask my fellow developers: how bad does a job need to be before you would consider jumping ship? And what should you look out for when considering taking a job? In future I will be asking about documented procedures, release control, bug management and adoption of new technologies.
    EDIT: Let me add some more fuel to the fire... I have been in my current job for just over a year, and the work I am doing almost never uses any of the knowledge I have gained from the other work I have done here. Everything is a giant learning curve. Because of this, about 30% of my time goes to learning what is going on with this new product (whose owner / original developer has left the company), 30% to finding the relevant documentation that helps the whole thing make sense, 30% to finding where to make the change, and 10% to actually making the change.

  • Building out a well-structured service layer

    - by Chris Stewart
    First, I want to say that it has been a while since I've gotten into the kind of detail I'm at currently. Lately I've been very much in the SharePoint world, and my entire thought process was focused there for quite some time. I'm very glad to be creating databases again, writing "lower level" code to deal with data access, and so forth. I'm working on a very simple web application and taking the opportunity to reacquaint myself with the way I used to structure my projects and various layers of code. For instance, I might have created something like this the last time I went about building something basic from scratch:

        MyProject/
          Domain/
            Impl/
              Person
          Model/
            IPersonRepository
            Impl/
              PersonRepository : IPersonRepository
          Services/
            IPersonService
            Impl/
              PersonService : IPersonService

    That would have been the project I did the real work in, and then referenced in the ASP.NET project. My approach was very much inspired by what I saw from the CodeCampServer project; at that time ASP.NET MVC was still very new, and it was the only open project I could find that was actively being developed, and by solid people at that. What ways are you going about structuring your projects and code when it comes to a general problem you're working on? Certainly various problems can put constraints on this, but assume it's a basic problem without specific needs that affect the structure and layout of your code.
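    For comparison, a minimal sketch of the same repository/service layering in Java (the original project is C#, and all names here are purely illustrative):

        // A domain entity, a repository interface for data access, and a
        // service that depends only on the interface.
        class Person {
            private final long id;
            private final String name;
            Person(long id, String name) { this.id = id; this.name = name; }
            long getId() { return id; }
            String getName() { return name; }
        }

        interface PersonRepository {
            Person findById(long id);
            void save(Person person);
        }

        class PersonService {
            private final PersonRepository repository;
            PersonService(PersonRepository repository) { this.repository = repository; }
            Person getPerson(long id) {
                // business rules live here; persistence stays behind the interface
                return repository.findById(id);
            }
        }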

  • Dealing with SQLException with Spring, Hibernate & Postgres

    - by mad
    Hi, I'm working on a project using HibernateDaoSupport for my DAOs, with Spring, spring-ws, Hibernate and Postgres, which will be used in a national application (meaning a lot of users). Every exception from Hibernate is automatically translated into a specific Spring DataAccessException. I have a table with a keyword column in the database and a unique constraint on it: no duplicate keywords are allowed. I have found two ways to deal with that in the insert DAO:
    1. Check for the duplicate manually (with a select) prior to doing the insert. This means the Spring transaction needs the SERIALIZABLE isolation level. The obvious drawback is that we now have two queries for a simple insert. Advantage: independent of the database.
    2. Let the insert go through, catch the SQLException, and convert it to a user-friendly message and error code for the final consumer of our web services. Spring has developed a way to translate specific exceptions into customized exceptions; see http://www.oracle.com/technology/pub/articles/marx_spring.html. In my case I would get a ConstraintViolationException. Ideally I would like to write a custom SQLExceptionTranslator to map the duplicate-keyword constraint in the database to a DuplicateWordException. But I can have many unique constraints on the same table, so I have to inspect the SQLException's message to find the name of the constraint declared in the CREATE TABLE ("uq_duplicate-constraint", for example). Now I have a strong dependency on the database.
    Thanks in advance for your answers, and excuse my poor English (it is not my mother tongue).
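    For what it's worth, here is a minimal sketch of such a translator, assuming Spring's SQLErrorCodeSQLExceptionTranslator as the base class; the exception type is hypothetical, and the constraint name is the one from the question:

        import java.sql.SQLException;
        import org.springframework.dao.DataAccessException;
        import org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator;

        // Hypothetical application exception for the duplicate-keyword case.
        public class DuplicateWordException extends DataAccessException {
            public DuplicateWordException(String msg) { super(msg); }
        }

        class KeywordExceptionTranslator extends SQLErrorCodeSQLExceptionTranslator {
            @Override
            protected DataAccessException customTranslate(String task, String sql, SQLException sqlEx) {
                // Postgres includes the violated constraint's name in the message,
                // so dispatch on it to tell the table's unique constraints apart.
                String msg = sqlEx.getMessage();
                if (msg != null && msg.contains("uq_duplicate-constraint")) {
                    return new DuplicateWordException("This keyword already exists");
                }
                return null; // fall through to Spring's default translation
            }
        }

    How the translator gets wired in depends on the data-access setup; with a plain JdbcTemplate it can be registered via setExceptionTranslator().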

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am now using dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion for a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints?
    [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be significant. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read.
    [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node.
    [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point.
    (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java)
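    One cheap thing to try, given the createXMLReader() observation above: construct the SAXReader once and reuse it for every document, so the XMLReader lookup is paid a single time rather than per parse. A sketch, assuming single-threaded use (SAXReader is not thread-safe):

        import java.io.InputStream;
        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.io.SAXReader;

        class SmallDocParser {
            // Created once; dom4j caches the underlying XMLReader inside it,
            // so repeated read() calls skip the XMLReaderFactory lookup.
            private final SAXReader reader = new SAXReader();

            Document parse(InputStream in) throws DocumentException {
                return reader.read(in);
            }
        }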

  • Servlet Security question about j_security_check, j_username and j_password

    - by Nitesh Panchal
    Hello, I used jdbcRealm in my web application and it's working fine. I also defined all constraints in my web.xml; for example, all pages matching the URL pattern /Admin/* should be accessed only by admin. I have a login form which uses the standard j_security_check, j_username and j_password. Now, when I type Admin/home.jsf it rightly redirects me to login.jsf, and when I enter the password there I am redirected to home.jsf. This works alright, but the problem comes when I go directly to login.jsf and then type the username and password: this time it redirects me back to login.jsf. Is there any way to specify which page to go to on a successful login? I need to specify different pages for different roles: for Admin it is /Admin/home.jsf, and for general users it is /General/home.jsf, because the login form is shared between different types of users. Where do I specify all these things? Secondly, I want to have a "remember me" checkbox at the end of the login form. How do I do this? By default the form is submitted to the j_security_check servlet and I have no control over its execution. Please help. This doesn't seem so hard, but it looks like I am missing something.
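    One common workaround for the role-based landing page is a servlet filter on a shared post-login page that forwards each user to the home page for their role. A sketch (the role names here are assumptions; the paths are the ones from the question):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Map this filter to the shared landing page in web.xml.
        public class RoleRedirectFilter implements Filter {
            public void init(FilterConfig config) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                if (request.isUserInRole("admin")) {
                    response.sendRedirect(request.getContextPath() + "/Admin/home.jsf");
                } else if (request.isUserInRole("general")) {
                    response.sendRedirect(request.getContextPath() + "/General/home.jsf");
                } else {
                    chain.doFilter(req, res); // not authenticated yet; continue normally
                }
            }
        }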

  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets, S = S1 + S2 + ... + Sn, and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have is to iteratively add each next object from S into the Si where it results in the least possible (at this stage) increase in combined Di size:

        for x in S:
            i = the i for which size(f(Si + {x})) - size(f(Si)) is minimal
            Si = Si + {x}

    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). It is also slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively-improving algorithm?
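    For concreteness, a direct Java transcription of that greedy loop, with the expensive size(f(Si)) evaluation abstracted behind an interface (all names here are illustrative):

        import java.util.List;
        import java.util.Set;

        // Stand-in for size(f(Si)): the costly evaluation described above.
        interface SubsetCost<T> {
            long sizeOf(Set<T> subset);
        }

        class GreedyPartitioner {
            // Assigns each object to the subset whose cost grows the least.
            // Assumes 'subsets' is a non-empty list of (possibly empty) sets.
            static <T> void assign(Iterable<T> items, List<Set<T>> subsets, SubsetCost<T> cost) {
                for (T x : items) {
                    Set<T> best = null;
                    long bestIncrease = Long.MAX_VALUE;
                    for (Set<T> si : subsets) {
                        long before = cost.sizeOf(si);
                        si.add(x);                        // tentative add
                        long increase = cost.sizeOf(si) - before;
                        si.remove(x);
                        if (increase < bestIncrease) {
                            bestIncrease = increase;
                            best = si;
                        }
                    }
                    best.add(x);
                }
            }
        }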

  • Detaching all entities of T to get fresh data

    - by Goran
    Let's take an example where two types of entities are loaded: Product and Category, with Product.CategoryId -> Category.Id. We have CRUD operations available on Products (not Categories). If Categories are updated on another screen (or by another user on the network), we would like to be able to reload the Categories while preserving the context we currently use, since we could be in the middle of editing data and do not want changes to be lost (and we cannot depend on saving first, since we may have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways:
    1. Keep Products attached to the context and Categories detached from it. This means we lose the ability to access Product.Category.Name, which we do sometimes require (for example when printing data), so we would need to resolve it manually.
    2. Detach and reattach all Categories in the current context:

        Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached);

    and then reload the Categories, which will fetch fresh data. Do you see any problem with this second approach? We understand it requires all constraints to be put on foreign keys, not navigation properties, since when detaching all Categories the Product.Category navigation properties are reset to null as well. There could also be a potential performance problem, which we did not test, since a couple of thousand Products could be loaded, and all of them would need to resolve the navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?

  • Tree deletion with NHibernate

    - by Tigraine
    Hi, I'm struggling with a little problem and starting to arrive at the conclusion that it's simply not possible. I have a table called Group. As with most of these systems, Group has a ParentGroup and a Children collection, so the Group table looks like this:

        Group
          - ID (PK)
          - Name
          - ParentId (FK)

    I did my mappings using FNH AutoMappings, but I had to override the defaults for this:

        p.References(x => x.Parent)
         .Column("ParentId")
         .Cascade.All();
        p.HasMany(x => x.Children)
         .KeyColumn("ParentId")
         .ForeignKeyCascadeOnDelete()
         .Cascade.AllDeleteOrphan()
         .Inverse();

    Now, the general idea was to be able to delete a node and have all of its children deleted too by NHibernate. So deleting the only root node should basically clear the whole table. I tried first with Cascade.AllDeleteOrphan, but that works only for deletion of items from the Children collection, not deletion of the parent. Next I tried ForeignKeyCascadeOnDelete, so the operation gets delegated to the database through ON DELETE CASCADE. But once I do that, MSSQL 2008 does not allow me to create the constraint, failing with: "Introducing FOREIGN KEY constraint 'FKBA21C18E87B9D9F7' on table 'Group' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints." Well, and that's it for me. I guess I'll just loop through the children and delete them one by one, thus doing an N+1. If anyone has a suggestion on how to do that more elegantly, I'd be eager to hear it.

  • Need help optimizing an Oracle query

    - by deming
    I need help optimizing the following query, which takes a long time to finish: almost 213 seconds. Because of some constraints, I cannot add an index and have to live with the existing ones.

        INSERT INTO temp_table_1 (USER_ID, role_id, participant_code, status_id)
        WITH A AS (SELECT USER_ID user_id, ROLE_ID, STATUS_ID, participant_code
                     FROM USER_ROLE
                    WHERE participant_code IS NOT NULL),   --1
             B AS (SELECT ROLE_ID FROM CMP_ROLE WHERE GROUP_ID = 3),
             C AS (SELECT USER_ID FROM USER)               --2
        SELECT USER_ID, ROLE_ID, PARTICIPANT_CODE, MAX(STATUS_ID)
          FROM A
         INNER JOIN B USING (ROLE_ID)
         INNER JOIN C USING (USER_ID)
         GROUP BY USER_ID, role_id, participant_code;

        --1 = query when run alone takes 100+ seconds
        --2 = query when run alone takes 19 seconds

        DELETE temp_table_1
         WHERE ROWID NOT IN (
           SELECT a.ROWID
             FROM temp_table_1 a, USER_ROLE b
            WHERE a.status_id = b.status_id
              AND (b.ACTIVE IN (1)
                   OR (b.ACTIVE IN (0, 3)
                       AND SYSDATE BETWEEN b.effective_from_date AND b.effective_to_date)));

    It seems like the person who wrote the query is trying to get everything into a temp table first and then deleting records from it; whatever is left is the actual result. Can't it be done in such a way that there is no need for the delete, so we just get the needed results? That would save time.

  • Zend hostname route doesn't match when it has child routes

    - by talisker
    I am implementing an Admin module, which contains the following routes:

        'router' => array(
            'routes' => array(
                'admin' => array(
                    'type' => 'Zend\Mvc\Router\Http\Hostname',
                    'options' => array(
                        'route' => ':subdomain.mydomain.local',
                        'constraints' => array(
                            'subdomain' => 'admin',
                        ),
                        'defaults' => array(
                            'module' => '__NAMESPACE__',
                            'controller' => 'Admin\Controller\Index',
                            'action' => 'index',
                        ),
                    ),
                    'priority' => 9000,
                    'may_terminate' => true,
                    'child_routes' => array(
                        'users' => array(
                            'type' => 'Zend\Mvc\Router\Http\Literal',
                            'options' => array(
                                'route' => '/users',
                                'defaults' => array(
                                    'module' => '__NAMESPACE__',
                                    'controller' => 'Admin\Controller\Users',
                                    'action' => 'index',
                                ),
                            ),
                        ),
                    ),
                ),
            ),
        ),

    And this is the home route configuration:

        'home' => array(
            'type' => 'Zend\Mvc\Router\Http\Literal',
            'options' => array(
                'route' => '/',
                'defaults' => array(
                    'controller' => 'Application\Controller\Index',
                    'action' => 'index',
                ),
            ),
        ),

    When I try to access http://admin.mydomain.com, the route always matches the home route; but if I remove all the child routes from the admin route, the behavior is correct and http://admin.mydomain.com matches the admin route. Any idea?

  • How to solve generic algebra using solver/library programmatically? Matlab, Mathematica, Wolfram etc?

    - by DevDevDev
    I'm trying to build an algebra trainer for students. I want to construct a representative problem, define constraints and relationships on the parameters, and then generate a bunch of LaTeX-formatted problems from that representation. As an example, a specific question might be: if y < 0 and (x+3)(y-5) = 0, what is x? Answer: x = -3. I would like to encode this as a LaTeX-formatted problem like:

        If $y<0$ and $(x+constant_1)(y+constant_2)=0$ what is the value of x? Answer = -constant_1

    and plug into my problem solver:

        constant_1 > 0, constant_1 < 60, constant_1 = INTEGER
        constant_2 < 0, constant_2 > -60, constant_2 = INTEGER

    Then it will randomly construct pairs (constant_1, constant_2) that I can feed into my LaTeX generator. Obviously this is an extremely simple example with no real "solving", but hopefully it gets the point across. Things I'm looking for, ideally in priority order:
    - solve algebra problems
    - relatively straightforward definition of relationships
    - rich support for LaTeX formatting (not just writing encoded strings)
    Thanks!
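    The sampling step for this particular example is simple enough to sketch directly; in Java (the class and method names are made up), drawing constants that satisfy the stated constraints and substituting them into the LaTeX template:

        import java.util.Random;

        class ProblemGenerator {
            private final Random rng = new Random();

            // constant_1: integer in (0, 60); constant_2: integer in (-60, 0).
            // With y < 0 and constant_2 < 0, the second factor is never zero,
            // so the answer is always -constant_1.
            String nextProblem() {
                int c1 = 1 + rng.nextInt(59);
                int c2 = -(1 + rng.nextInt(59));
                return String.format(
                    "If $y<0$ and $(x%+d)(y%+d)=0$ what is the value of x? Answer = %d",
                    c1, c2, -c1);
            }
        }

    The hard part, of course, is the general case, where the answer has to be derived from the constraints rather than known in advance; that is where a CAS-style solver (Mathematica, Matlab's Symbolic Toolbox, etc.) comes in.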

  • Customizing the Stars Image for Ajaxful_Rating RoR plugin

    - by Kevin
    I'm trying to come up with my own star image that's slightly smaller and a different style than the one provided in the gem/plugin, but Ajaxful_rating doesn't have an easy way to do this. Here's what I've figured out so far: the stars.png in the public folder is three 25x25-pixel tiles stacked vertically, ordered empty star, normal star, and hover star. I'm assuming that as long as you keep the above constraints, you should be fine without modifying any other files. But what if you want to make the star image larger or smaller? I've found where you can change the height, in stylesheets/ajaxful_rating.css:

        .ajaxful-rating{
          position: relative;
          /*width: 125px; this is set dynamically */
          height: 25px;
          overflow: hidden;
          list-style: none;
          margin: 0;
          padding: 0;
          background-position: left top;
        }
        .ajaxful-rating li{ display: inline; }
        .ajaxful-rating a,
        .ajaxful-rating span,
        .ajaxful-rating .show-value{
          position: absolute;
          top: 0;
          left: 0;
          text-indent: -1000em;
          height: 25px;
          line-height: 25px;
          outline: none;
          overflow: hidden;
          border: none;
        }

    You just need to change every place that says "25px" above to whatever height your new star image is. This works fine but doesn't display the horizontal part correctly. Anyone know where I would look to set the horizontal part as well? (I'm assuming it's in an .rb file somewhere, based on how many stars you specified in your ajaxful_rating setup.)

  • JAXB code generation: how to remove a zero occurrence field?

    - by reef
    Hi all, I use JAXB 2.1 to generate Java classes from several XSD files, and I have a problem related to complex type restriction. One of the restrictions modifies the occurrence configuration from minOccurs="0" maxOccurs="unbounded" to minOccurs="0" maxOccurs="0"; thus the field is not needed anymore in the restricted type. But JAXB actually generates the restricted class with a [0..1] cardinality instead of 0. By the way, the generation is tuned with <xjc:treatRestrictionLikeNewType /> so that an XSD restriction is not mapped to a Java class inheritance. Here is an example. This is the way a field is defined in a complex type A:

        <element name="qualifier" type="CR" maxOccurs="unbounded" minOccurs="0"/>

    Here is the way the same field is restricted in another complex type B that restricts A:

        <element name="qualifier" type="CR" minOccurs="0" maxOccurs="0"/>

    In the generated class A I have:

        @XmlElement(name = "qualifier")
        protected List<CR> qualifiers;

    And in the generated class B I have:

        protected CR qualifiers;

    With my poor understanding of JAXB, the absence of the XmlElement annotation tells JAXB not to marshall/unmarshall this field. Am I wrong? If I am right, is there a way to tell JAXB not to generate the qualifiers field at all? This would in my opinion be a much better generation, as it respects the constraints. Any ideas or thoughts on the topic? Thanks!!

  • How to filter node list based on the contents of another node list

    - by ~otakuj462
    Hi, I'd like to use XSLT to filter a node list based on the contents of another node list. Specifically, I'd like to filter a node list such that elements with identical id attributes are eliminated from the resulting node list, with priority given to one of the two node lists. The way I originally imagined implementing this was to do something like:

        <xsl:variable name="filteredList1" select="$list1[not($list2[@id_from_list1 = @id_from_list2])]"/>

    The problem is that the context node changes in the predicate for $list2, so I don't have access to the attribute @id_from_list1. Due to these scoping constraints, it's not clear to me how I would be able to refer to an attribute from the outer node list using nested predicates in this fashion. To get around the issue of the context node, I've tried a solution involving a for-each loop, like the following:

        <xsl:variable name="filteredList1">
          <xsl:for-each select="$list1">
            <xsl:variable name="id_from_list1" select="@id_from_list1"/>
            <xsl:if test="not($list2[@id_from_list2 = $id_from_list1])">
              <xsl:copy-of select="."/>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>

    But this doesn't work correctly, and it's not clear to me how it fails: using the above technique, filteredList1 has a length of 1 but appears to be empty. It's strange behaviour, and anyhow, I feel there must be a more elegant approach. I'd appreciate any guidance anyone can offer. Thanks.

  • ASP.NET enum dropdownlist validation

    - by Arun Kumar
    I have an enum:

        public enum TypeDesc
        {
            [Description("Please Specify")]
            PleaseSpecify,
            Auckland,
            Wellington,
            [Description("Palmerston North")]
            PalmerstonNorth,
            Christchurch
        }

    I bind this enum to a drop-down list using the following code on Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (TypeDropDownList.Items.Count == 0)
            {
                foreach (TypeDesc newPatient in EnumToDropDown.EnumToList<TypeDesc>())
                {
                    TypeDropDownList.Items.Add(EnumToDropDown.GetEnumDescription(newPatient));
                }
            }
        }

        public static string GetEnumDescription(Enum value)
        {
            FieldInfo fi = value.GetType().GetField(value.ToString());
            DescriptionAttribute[] attributes =
                (DescriptionAttribute[])fi.GetCustomAttributes(typeof(DescriptionAttribute), false);
            if (attributes != null && attributes.Length > 0)
                return attributes[0].Description;
            else
                return value.ToString();
        }

        public static IEnumerable<T> EnumToList<T>()
        {
            Type enumType = typeof(T);
            // Can't use generic type constraints on value types,
            // so have to do the check like this
            if (enumType.BaseType != typeof(Enum))
                throw new ArgumentException("T must be of type System.Enum");
            Array enumValArray = Enum.GetValues(enumType);
            List<T> enumValList = new List<T>(enumValArray.Length);
            foreach (int val in enumValArray)
            {
                enumValList.Add((T)Enum.Parse(enumType, val.ToString()));
            }
            return enumValList;
        }

    And my .aspx page uses the following markup to validate:

        <asp:DropDownList ID="TypeDropDownList" runat="server">
        </asp:DropDownList>
        <asp:RequiredFieldValidator ID="TypeRequiredValidator" runat="server"
            ControlToValidate="TypeDropDownList"
            ErrorMessage="Please Select a City"
            Text="<img src='Styles/images/Exclamation.gif' />"
            ValidationGroup="city"></asp:RequiredFieldValidator>

    But my validation accepts "Please Specify" as a city name. I want to stop the user from submitting if no city is selected.

  • How to find the nearest day of the week for an arbitrary date?

    - by Stig Brautaset
    Is there a more elegant way than the below to find the nearest day of the week for a given date using JodaTime? I initially thought setCopy() would be it, but that sets the day to the particular day in the same week. Thus, if ld is 2011-11-27 and day is "Monday", the following function returns 2011-11-21, and not 2011-11-28 as I want.

        // Note that "day" can be _any_ day of the week, not just weekdays.
        LocalDate getNearestDayOfWeek(LocalDate ld, String day) {
            return ld.dayOfWeek().setCopy(day);
        }

    Below is a work-around I came up with that works for the particular constraints in my current situation, but I'd love help finding a completely generic solution that always works.

        LocalDate getNearestDayOfWeek(LocalDate ld, String day) {
            LocalDate target = ld.dayOfWeek().setCopy(day);
            if (ld.getDayOfWeek() > DateTimeConstants.SATURDAY) {
                target = target.plusWeeks(1);
            }
            return target;
        }

    Looking more into this, I came up with the following, which seems to be a more correct solution, though it also seems awfully complicated:

        LocalDate getNearestDayOfWeek(LocalDate ld, String day) {
            LocalDate target = ld.dayOfWeek().setCopy(day);
            if (target.isBefore(ld)) {
                LocalDate nextTarget = target.plusWeeks(1);
                Duration sincePrevious = new Duration(target.toDateMidnight(), ld.toDateMidnight());
                Duration untilNext = new Duration(ld.toDateMidnight(), nextTarget.toDateMidnight());
                if (sincePrevious.isLongerThan(untilNext)) {
                    target = nextTarget;
                }
            }
            return target;
        }
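    For what it's worth, a sketch of a shorter generic approach: normalize the signed distance to the target weekday into the range [-3, 3] and add it. With seven days the nearest occurrence is always unique, so no tie-breaking is needed. This assumes the target day is passed as a Joda DateTimeConstants value (1 = Monday ... 7 = Sunday) rather than a string:

        LocalDate getNearestDayOfWeek(LocalDate ld, int targetDayOfWeek) {
            int diff = targetDayOfWeek - ld.getDayOfWeek(); // both in 1..7
            if (diff > 3) diff -= 7;   // closer going backwards
            if (diff < -3) diff += 7;  // closer going forwards
            return ld.plusDays(diff);
        }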

  • Error No 150 in MySQL

    - by buddhamagnet
    Hi all. This is making me sweat - I am getting error 150 when I try and create a table in MySQL. I've scoured the forums to no avail. The statement uses foreign key constraints - both tables are InnoDB, all relevant columns have the same data type and both tables have the same charset and collation. Here's the CREATE TABLE, and the original CREATE TABLE statement for the table that's being referenced. Any ideas? New table:

        CREATE TABLE `approval` (
          `rev_id` int(10) UNSIGNED NOT NULL,
          `rev_page` int(10) UNSIGNED NOT NULL,
          `user_id` int(10) UNSIGNED NOT NULL,
          `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY (`rev_id`,`rev_page`,`user_id`),
          KEY `FK_approval_user` (`user_id`),
          CONSTRAINT `FK_approval_revision` FOREIGN KEY (`rev_id`, `rev_page`)
            REFERENCES `revision` (`rev_id`, `rev_page`),
          CONSTRAINT `FK_approval_user` FOREIGN KEY (`user_id`)
            REFERENCES `user` (`user_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Referenced table:

        CREATE TABLE `revision` (
          `rev_id` int(10) unsigned NOT NULL auto_increment,
          `rev_page` int(10) unsigned NOT NULL,
          `rev_text_id` int(10) unsigned NOT NULL,
          `rev_comment` tinyblob NOT NULL,
          `rev_user` int(10) unsigned NOT NULL default '0',
          `rev_user_text` varbinary(255) NOT NULL default '',
          `rev_timestamp` binary(14) NOT NULL default '\0\0\0\0\0\0\0\0\0\0\0\0\0\0',
          `rev_minor_edit` tinyint(3) unsigned NOT NULL default '0',
          `rev_deleted` tinyint(3) unsigned NOT NULL default '0',
          `rev_len` int(10) unsigned default NULL,
          `rev_parent_id` int(10) unsigned default NULL,
          PRIMARY KEY (`rev_id`),
          UNIQUE KEY `rev_page_id` (`rev_page`,`rev_id`),
          KEY `rev_timestamp` (`rev_timestamp`),
          KEY `page_timestamp` (`rev_page`,`rev_timestamp`),
          KEY `user_timestamp` (`rev_user`,`rev_timestamp`),
          KEY `usertext_timestamp` (`rev_user_text`,`rev_timestamp`)
        ) ENGINE=InnoDB AUTO_INCREMENT=4904 DEFAULT CHARSET=binary
          MAX_ROWS=10000000 AVG_ROW_LENGTH=1024;

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest: After a day of running (against nearly 1 GB of data), a set of statements has tumbled down to 40 inserts per second. I am looking to increase that by an order of magnitude or two.
    SQL Code: The code to insert the information comes in two parts: a master record and its detail records.
    The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                  WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                    AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, ' ', 1);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                  WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                    AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0.5, ' ', 2);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                  WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                    AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, 'T', 3);

    Proposed Solution:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

        SET @month_ref_id := (SELECT LAST_INSERT_ID());

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0, 'T', 3);

    Constraints: The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table if it would help.
    Question: Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!

  • What library can I use to do simple, lightweight message passing?

    - by Mike
    I will be starting a project which requires communication between distributed nodes (the project is in C++). I need a lightweight message-passing library to pass very simple messages (basically just strings of text) between nodes. The library must have the following characteristics:
    - No external setup required. I need to be able to get everything up and running in my code; I don't want to require the user to install any packages or edit any configuration files (other than a list of IP addresses and ports to connect to).
    - The underlying protocol must be TCP (or, if it is UDP, the library must guarantee eventual receipt of the message).
    - The library must be able to send and receive arbitrarily large strings (think up to 3 GB+).
    - The library needn't support any security mechanisms, fault tolerance, or encryption; I just need it to be fast, simple, and easy to use.
    I've considered MPI, but concluded it would require too much setup on the user's machine for my project. What library would you recommend for such a project? I would roll my own, but due to time constraints I don't think that will be feasible.

  • Randomly sorting an array

    - by Cam
    Does there exist an algorithm which, given an ordered list of symbols {a1, a2, a3, ..., ak}, produces in O(n) time a new list of the same symbols in a random order without bias? "Without bias" means the probability that any symbol s will end up in some position p in the list is 1/k. Assume it is possible to generate an unbiased integer from 1 to k inclusive in O(1) time. Also assume that O(1) element access/mutation is possible, and that it is possible to create a new list of size k in O(k) time. In particular, I would be interested in a 'generative' algorithm: one that has O(1) initial overhead and then produces a new element for each slot in the list, taking O(1) time per slot. If no solution exists to the problem as described, I would still like to know about solutions that do not meet my constraints in one or more of the following ways (and/or in other ways if necessary):
    - the time complexity is worse than O(n);
    - the algorithm is biased with regard to the final positions of the symbols;
    - the algorithm is not generative.
    I should add that this problem appears to be the same as randomly sorting the integers from 1 to k, since we can sort the list of integers from 1 to k and then, for each integer i in the new list, produce the symbol ai.
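    For reference, the standard Fisher-Yates shuffle meets the core requirements: O(n) time, unbiased given an unbiased integer generator, and O(1) overhead beyond the allowed O(k) copy of the input into a mutable list. It is also generative in the stated sense, since each iteration finalizes one position of the output:

        import java.util.Random;

        class Shuffle {
            // Unbiased in-place shuffle: every permutation has probability 1/k!.
            static <T> void fisherYates(T[] items, Random rng) {
                for (int i = items.length - 1; i > 0; i--) {
                    int j = rng.nextInt(i + 1); // unbiased index in [0, i]
                    T tmp = items[i];
                    items[i] = items[j];
                    items[j] = tmp;
                }
            }
        }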

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j0..jn, each of which produces a chunk of data oi. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks: the output file has to be in the order o0 o1 o2 ... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
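    Whatever the canonical name, the mechanism itself is small. A minimal sketch of the buffering described above (hypothetical API): chunks arrive tagged with their input sequence number, are held until they become contiguous, and are then flushed in order:

        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.HashMap;
        import java.util.Map;

        class ReorderBuffer {
            private final Map<Long, byte[]> pending = new HashMap<Long, byte[]>();
            private final OutputStream out;
            private long nextToWrite = 0;

            ReorderBuffer(OutputStream out) { this.out = out; }

            // Called by worker threads as each output block oi completes.
            synchronized void submit(long seq, byte[] chunk) throws IOException {
                pending.put(seq, chunk);
                byte[] next;
                while ((next = pending.remove(nextToWrite)) != null) {
                    out.write(next); // flush everything that is now contiguous
                    nextToWrite++;
                }
            }
        }

    Under the realtime and memory constraints mentioned, one would additionally bound the number of pending chunks, e.g. by blocking producers when the map grows too large.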

  • Greasemonkey failing to GM_setValue()

    - by HonoredMule
    I have a Greasemonkey script that uses a JavaScript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I cannot for the life of me determine why. The problem code:
    - works for other, larger objects being maintained;
    - is presently handling a smaller total amount of data than previously worked;
    - is not colliding with any function or other object definitions;
    - can (optionally) successfully save the problem storage key as "{}" during code startup.

        this.save = function(table) {
            var tables = this.tables;
            if (table)
                tables = [table];
            for (i in tables) {
                logger.log(this[tables[i]]);
                logger.log(JSON.stringify(this[tables[i]]));
                GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
                logger.log(tables[i] + "_" + this.user + " updated");
                logger.log(GM_getValue(tables[i] + "_" + this.user));
            }
        }

    The problem is consistently reproducible, and the logging statements produce the following output in Firebug:

        Object { 54,10 = Object }
        {"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}}
        bookmarks_HonoredMule saved
        undefined

    The first line expands to show complete contents as expected, though there is one oddity: Firebug highlights the array keys in purple instead of the usual black used for anonymous objects. The second line is the correctly parsed string. I have tried altering the format of the object keys, to no effect. Further narrowing down the issue: this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that also does not help. Reloading the page confirms that saving the non-empty object truly failed. Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but it doesn't appear that can be the problem: as previously mentioned, I've already reduced storage usage. Other, larger objects still save, and the total number of objects, which was not high already, has been further reduced by an amount greater than the quantity of data I'm attempting to store here.

  • "end()" iterator for back inserters?

    - by Thanatos
    For iterators such as those returned from std::back_inserter(), is there something that can be used as an "end" iterator? This seems a little nonsensical at first, but I have an API which is:

        template<typename InputIterator, typename OutputIterator>
        void foo(
            InputIterator input_begin,
            InputIterator input_end,
            OutputIterator output_begin,
            OutputIterator output_end);

    foo performs some operation on the input sequence, generating an output sequence (whose length is known to foo, but may or may not be equal to the input sequence's length). Taking the output_end parameter is the odd part: std::copy doesn't do this, for example, and assumes you're not going to pass it garbage. foo does it to provide range checking: if you pass a range too small, it throws an exception, in the name of defensive programming (instead of potentially overwriting random bits in memory). Now, say I want to pass foo a back inserter, specifically one from a std::vector, which has no limit outside of memory constraints. I still need an "end" iterator - in this case, something that will never compare equal. (Or, if I had a std::vector with a restriction on length, perhaps it might sometimes compare equal?) How do I go about doing this? I do have the ability to change foo's API - is it better to not check the range, and instead provide an alternate means to get the required output range? (That would be needed anyway for raw arrays, but not for back inserters into a vector.) This would seem less robust, but I'm struggling to make the "robust" approach above work.

  • Define "Validation in the Model"

    - by sunwukung
    There have been a couple of discussions regarding the location of user input validation:
    http://stackoverflow.com/questions/659950/should-validation-be-done-in-form-objects-or-the-model
    http://stackoverflow.com/questions/134388/where-do-you-do-your-validation-model-controller-or-view
    These discussions are quite old, so I wanted to ask the question again to see if anyone has any fresh input. If not, I apologise in advance. If you come from the "validation in the Model" camp: does Model mean the OOP representation of data (i.e. Active Record/Data Mapper) as "Entity" (to borrow the DDD terminology), in which case you would, I assume, want all Model classes to inherit common validation constraints? Or can these rules simply be part of a Service in the Model, i.e. a validation service? For example, could you consider Zend_Form and its validation classes part of the Model? The concept of a Domain Model does not appear to be limited to Entities, and so validation may not necessarily need to be confined to Entities. It seems that you would require a lot of potentially superfluous handing of values and responses back and forth between forms and "Entities", and in some instances you may not persist the data received from user input, or receive it from user input at all.

  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    OK, this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server backend. I am in the process of writing a new front end, but we need to continue using the Access front end for a while even after we deploy the new one, for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data. The problem is that the database design is a nightmare. For example, some simple parent-child table relationships have four- and five-part composite primary keys. I would REALLY like to remove these PKs and replace them with unique constraints or whatever, and add to each of these tables a new column called ID that is just an identity. If I change the PKs and FKs on these tables to more manageable fields, will the Access app have problems? That is, does Access use the metadata from the tables (PK and FK info) in such a way that changing them would break the app?
