Search Results

Search found 18191 results on 728 pages for 'single board'.


  • Socket Read In Multi-Threaded Application Returns Zero Bytes or ECONNRESET (104)

    - by user309670
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a daemonized C application on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll to multiplex connections over a Unix socket. A user-submitted string is parsed for certain characters/words using strstr() and, if they are found, the server spawns 4 joinable threads to contact different websites simultaneously. In each thread I use socket, connect, write and read to talk to the webservers over TCP on port 80. All connections and writes seem successful. Reads from the webserver sockets fail, however, in one of two ways: (A) 3 threads seem to hang, and only one thread returns -1 with errno set to 104. The responding thread takes around 10 minutes - an eternity. I read somewhere that errno 104 means 'the connection was reset by peer' - that is ECONNRESET, not EINTR (EINTR is 4); or (B) 3 threads read 0 bytes, and only 1 of the 4 threads actually returns some data. Aren't socket reads/writes thread-safe? I do use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc.

    I doubt that the webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal) everything works perfectly, though of course in series rather than in parallel.

    There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon hangs and hogs 100% CPU forever, yet nothing is written to the client's end. I'm sure the client (a very typical PHP socket application) hasn't closed the connection when this happens - no error(s) detected either. Any ideas? I cannot figure out what is wrong even with Valgrind, GDB or extensive logging. Kindly help where you can.
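
    For the read side, a minimal sketch of a defensive per-thread read loop, assuming blocking sockets (function and variable names are illustrative): retry on EINTR, treat 0 bytes as an orderly close, and surface ECONNRESET as a real error.

        #include <errno.h>
        #include <unistd.h>

        /* Read up to len bytes, retrying on EINTR; returns bytes read or -1. */
        static ssize_t read_all(int fd, char *buf, size_t len)
        {
            size_t total = 0;
            while (total < len) {
                ssize_t n = read(fd, buf + total, len - total);
                if (n < 0) {
                    if (errno == EINTR)
                        continue;      /* interrupted by a signal: just retry */
                    return -1;         /* ECONNRESET etc.: genuine failure */
                }
                if (n == 0)
                    break;             /* peer closed the connection */
                total += n;
            }
            return (ssize_t)total;
        }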

  • Am I crazy? (How) should I create a jQuery content editor?

    - by Brendon Muir
    Ok, so I created a CMS mainly aimed at primary schools. It's getting fairly popular in New Zealand, but the one thing I hate with a passion is the largely bad quality of in-browser WYSIWYG editors. I've been using KTML (made by InterAKT, which was purchased by Adobe a few years ago). In my opinion this editor does a lot of great things (image editing/management, thumbnailing and pretty good content editing). Unfortunately time has had its nasty way with this product: new browsers are beginning to break features and generally degrade the performance of this tool. It's also quite scary basing my livelihood on a defunct product! I regularly hunt around to see if anything has changed in the WYSIWYG arena. The closest thing I've seen that excites me is the WYSIHAT framework, but they've decided to ignore a pretty relevant editing paradigm, which I'm going to outline below. This is the idea for my proposed editor, and I don't know of any existing products that can do this properly.

    The traditional model for editing, let's say, a Page in a CMS is to log into a 'back end' and click edit on the page. This loads another screen with the editor in it and perhaps a few other fields. More advanced CMSs may have several editing boxes for different portions of the page. The big problem with this way of doing things is that the user is editing a document outside of the final context it will appear in - in the simplest terms, the page template. Many things can be wrong: e.g. the width of the editing area might differ from the width of the actual template area. The height is nearly always fixed, because existing editors always seem to use IFRAMEs for backward compatibility. And there are plenty of other beefs which I'm sure you're quite aware of if you're in this development area.

    Here's my editor utopia. You click 'Edit Page', and the actual page (with its actual template) displays. Portions of the page have been marked as editable via a class name. You click on one of these areas (in my case it'd just be the big 'body' area in the middle of the template) and an editing bar drops down from the top of the screen with all your standard controls (bold, italic, insert image etc.). IFRAMEs are never used; instead we rely on setting contentEditable to true on the DIVs in question. Firefox 2 and IE6 can go away, let's move on. You can edit the page knowing exactly how it will look when you save it. Because all the styles for this template are loaded, your headings will look correct; everything will be just dandy.

    Is this such a radical concept? Why are we still content with TinyMCE and that other editor that is too embarrassing to use because it sounds like a swear word!? Let's face the facts: I'm a JavaScript novice. I did once play around in this area using the JavaScript Anthology from SitePoint as a guide. It was quite a cool learning experience, but they of course used the IFRAME to make their lives easier. I tried to go a different route, just using contentEditable, and even tried to sidestep the native content editing routines (execCommand) and write my own. They kind of worked, but there were always issues. Now we have jQuery and a few libraries that abstract things like IE's lack of Range support. I'm wondering: am I crazy, or is it actually a good idea to try and build an editor around this editing paradigm, using jQuery and relevant plugins to make the job easier?

    My actual questions: Where would you start? What plugins do you know of that would help the most? Is it worth it, or is there a magical project that already exists that I should join in on? What are the biggest hurdles to overcome in a project like this? Am I crazy?

    I hope this question has been posted on the right board. I figured it is a technical question, as I want to know specific hurdles and pitfalls to watch out for, and also whether it is technically feasible with today's technology. Looking forward to hearing people's thoughts and opinions.
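
    For what it's worth, the core of the paradigm fits in a few lines - a minimal sketch, with all class names, IDs and toolbar markup assumed rather than taken from any real product:

        // Make a marked region editable in place - no IFRAME involved.
        $('.editable').click(function () {
            this.contentEditable = true;   // edit the DIV inside the real template
            $('#toolbar').slideDown();     // the drop-down editing bar (assumed markup)
        });

        // Standard rich-text commands still go through the native execCommand.
        $('#bold-button').click(function () {
            document.execCommand('bold', false, null);
        });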

  • DIY intellisense on XPath - design approach? (WinForms app)

    - by Cheeso
    I read the DIY Intellisense article on CodeProject, which was referenced from the "Mimic Intellisense?" question here on SO. I want to do something similar - DIY intellisense, but for XPath rather than C#. The design approach used there makes sense to me: maintain a tree of terms, and when the "completion character" is pressed (in the case of C#, a dot), pop up the list of possible completions in a textfield. Then allow the user to select a term from the textfield through typing, arrow keys, or double-click.

    How would you apply this to XPath autocompletion?

    Should there be an autocomplete key? In XPath there is no obvious separator key like "dot" in C#. Should the popup be triggered explicitly in some other way, let's say Ctrl-.? Or should the parser try to autocomplete continuously?

    If I do the autocomplete continuously, how do I scale it properly? There are 93 XPath functions, not counting overloads, and I certainly don't want to pop up a list of 93 choices. How do I decide when I've narrowed it enough to offer a useful list of possible completions?

    How do I populate the tree of possible completions? For C# it's easy: walk the type space via reflection. At a first level, the "syntax tree" for C# seems like a single tree, and the list of completions at any point depends on the graph of nodes you've traversed to that point. Typing System.Console. traverses to a certain node in that tree, and the list of completions is the set of child nodes available at that node. On the other hand, XPath syntax seems like a "flatter" tree - function names, axis names, literals. Does this make sense? What have I not considered?
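
    On the continuous-narrowing question, a minimal C# sketch of one plausible policy (the names and the threshold are my own invention): filter the flat list of XPath function names by the token under the caret, and only show the popup once the candidate set is small enough to be useful.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class XPathCompletion
        {
            static readonly string[] Functions = {
                "string", "string-length", "starts-with", "substring",
                "substring-after", "substring-before", "sum" /* ...the rest... */
            };

            const int MaxSuggestions = 12; // arbitrary cutoff before popping up

            public static IList<string> Suggest(string token)
            {
                if (string.IsNullOrEmpty(token)) return new string[0];
                var matches = Functions
                    .Where(f => f.StartsWith(token, StringComparison.OrdinalIgnoreCase))
                    .OrderBy(f => f)
                    .ToList();
                return matches.Count <= MaxSuggestions
                    ? (IList<string>)matches
                    : new string[0]; // still too broad: stay quiet
            }
        }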

  • iReport / jFreeChart - Combining getItemPaint() override and series colors

    - by barneskd
    I'm attempting to create a bar chart that needs to use specific series colors for N categories and a separate color for 1 other category. Here is an example: http://i.imgur.com/1DeCF.png (sorry, I don't have enough points to embed images yet). In the mockup, Categories 1-3 use a collection of 5 colors to render their series, while Category 4 uses a single grey color.

    My first approach was to override the getItemPaint() method using a customizer class, but I can only figure out how to define a color at the category level, not the series level. Is it possible to define colors on category and/or series levels? Something like:

        If category != "Category4"
            Use colors A, B, C, D, and E in the category's series
        Else
            Use color F in the category's series

    Another thought I had was to combine iReport's Series Colors bar chart property with the getItemPaint() override; that way I could define the 5 colors in iReport and use the getItemPaint() override only where the category equals "Category 4". So far I've had no luck combining the two: if the override is defined, it overrides iReport's Series Colors property.
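
    A hedged sketch of the renderer-level approach, meant to live inside a JRChartCustomizer's customize() method - in JFreeChart's category renderers, row is the series index and column is the category index, so both are visible to one override (the palette here is illustrative):

        CategoryPlot plot = chart.getCategoryPlot();
        plot.setRenderer(new BarRenderer() {
            private final Paint[] palette = {
                Color.RED, Color.BLUE, Color.GREEN, Color.ORANGE, Color.MAGENTA
            };

            @Override
            public Paint getItemPaint(int row, int column) {
                Comparable category = getPlot().getDataset().getColumnKey(column);
                if ("Category4".equals(category.toString()))
                    return Color.GRAY;                // single flat color here
                return palette[row % palette.length]; // cycle series colors elsewhere
            }
        });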

  • NSTableView Drag and Drop not working

    - by macatomy
    Hi, I'm trying to set up very basic drag and drop for my NSTableView. The table view has a single column (with a custom cell). The column is bound to an NSArrayController, and the array controller's content array is bound to an NSArray on my controller object. The data displays fine in the table. I connected the dataSource and delegate outlets of the table view to my controller object, and then implemented these methods:

        - (BOOL)tableView:(NSTableView *)aTableView writeRowsWithIndexes:(NSIndexSet *)rowIndexes toPasteboard:(NSPasteboard *)pboard {
            NSLog(@"dragging");
            return YES;
        }

        - (NSDragOperation)tableView:(NSTableView *)tv validateDrop:(id <NSDraggingInfo>)info proposedRow:(NSInteger)row proposedDropOperation:(NSTableViewDropOperation)op {
            return NSDragOperationEvery;
        }

        - (BOOL)tableView:(NSTableView *)aTableView acceptDrop:(id <NSDraggingInfo>)info row:(NSInteger)row dropOperation:(NSTableViewDropOperation)operation {
            return YES;
        }

    I also registered the drag types in -awakeFromNib:

        #define MyDragType @"MyDragType"

        - (void)awakeFromNib {
            [super awakeFromNib];
            [_myTable registerForDraggedTypes:[NSArray arrayWithObjects:MyDragType, nil]];
        }

    The problem is that -tableView:writeRowsWithIndexes:toPasteboard: is never called. I've looked at a bunch of examples and I can't figure out what I'm doing wrong. Could the problem be that I'm using a custom cell? Is there something I'm supposed to override in the cell subclass to enable this functionality?

    EDIT: Confirmed. Switching the custom cell for a regular NSTextFieldCell made dragging work. Now, how do I make drag and drop work with my custom cell?
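
    A hedged guess at the custom-cell problem: a cell subclass that claims mouse tracking can consume the mouse-down before the table view gets a chance to start a row drag. On 10.5+, the usual fix is to report hits as content so the table interprets the gesture as a drag - a minimal sketch for the cell subclass:

        // In the custom NSCell subclass: let the table view treat clicks on the
        // cell's content as draggable rather than as the start of cell tracking.
        - (NSUInteger)hitTestForEvent:(NSEvent *)event
                               inRect:(NSRect)cellFrame
                               ofView:(NSView *)controlView
        {
            return NSCellHitContentArea;
        }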

  • Change image using JavaScript DOM

    - by user289346
        <html>
        <head>
        <script type="text/javascript">
        var curimage = "cottage_small.jpg";
        var curtext = "View large image";

        function changeSrc() {
            if (curtext == "View large image" || curimage == "cottage_small.jpg") {
                document.getElementById("boldStuff").innerHTML = "View small image";
                curtext = "View small image";
                document.getElementById("myImage") = "cottage_large.jpg";
                curimage = "cottage_large.jpg";
            } else {
                document.getElementById("boldStuff").innerHTML = "View large image";
                curtext = "View large image";
                document.getElementById("myImage") = "cottage_small.jpg";
                curimage = "cottage_small.jpg";
            }
        }
        </script>
        </head>
        <body>
        <!-- Your page here -->
        <h1>Pink Knoll Properties</h1>
        <h2>Single Family Homes</h2>
        <p>
            Cottage: <strong>$149,000</strong><br/>
            2 bed, 1 bath, 1,189 square feet, 1.11 acres<br/><br/>
            <a href="#" onclick="changeSrc()"><b id="boldStuff" />View large image</a>
        </p>
        <p><img id="myImage" src="cottage_small.jpg" alt="Photo of a cottage" /></p>
        </body>
        </html>

    This is my code. I need to change the image and the text at the same time when I click the link. I use LTS, and it flags the line document.getElementById("myImage") = "cottage_large.jpg"; with "wrong number of arguments or invalid property assignment". Can someone help? Bianca
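
    For what it's worth, the flagged lines assign a string to the DOM element itself, which is not a valid assignment target; the likely fix (offered as a suggestion, not tested against the original page) is to assign to the image's src property:

        document.getElementById("myImage").src = "cottage_large.jpg";
        // ...and correspondingly in the else branch:
        document.getElementById("myImage").src = "cottage_small.jpg";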

  • MySQL & PHP - Creating Multiple Parent Child Relations

    - by Ashok
    Hi, I'm trying to build a navigation system using a categories table with hierarchies. Normally the table would be defined as follows:

        id (int)       - primary key
        name (varchar) - name of the category
        parentid (int) - parent ID of this category, referenced to the same table (self join)

    The catch is that I require a category to be able to be a child of multiple parent categories - just like a Has And Belongs To Many (HABTM) relation. I know that if there are two tables, categories and items, we use a join table categories_items to hold the HABTM relations. But here I don't have two tables, only one, which should somehow hold HABTM relations to itself.

    Is this possible using a single table? If yes, how? If it isn't, what rules (table naming, fields) should I follow when creating the additional join table? I'm trying to achieve this with CakePHP; if someone can provide a CakePHP solution for this problem, that would be awesome. Even if that's not possible, any advice about creating the join table is appreciated. Thanks for your time.
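
    A single parentid column cannot express multiple parents, since each row holds exactly one; the standard move is a self-referencing join table. A minimal sketch in MySQL - treat the table and column names as assumptions, since CakePHP's conventions for a self-HABTM association may dictate different ones:

        CREATE TABLE categories (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL
        );

        -- One row per child/parent pairing, so a category can sit under
        -- any number of parents (and have any number of children).
        CREATE TABLE categories_parents (
            category_id INT UNSIGNED NOT NULL,
            parent_id   INT UNSIGNED NOT NULL,
            PRIMARY KEY (category_id, parent_id),
            FOREIGN KEY (category_id) REFERENCES categories (id),
            FOREIGN KEY (parent_id)   REFERENCES categories (id)
        );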

  • How to invalidate the OutputCache in a webfarm?

    - by Pure.Krome
    Hi folks, I've got a website that uses the OutputCache attribute to cache pages. Works great. Now I'm in the middle of R&D'ing scaling this site up to a web farm. Along with the usual suspects for web farm pain, I've noticed (pretty quickly/obviously) that invalidating the OutputCache on Server_A doesn't invalidate the OutputCache on Server_B - if I try to invalidate a single server's OutputCache. This makes total sense: how can S_A 'tell' S_B to invalidate when they are physically 2 separate machines, etc.?

    So - what are our options? Velocity? I understand this moves the caching to a different layer, which means the final result (output) always has to be determined, as opposed to the OutputCache, which remembers the final output content (yes, VaryBy gives different versions, etc., which is totally fine). So even though the POCOs or business objects are all synced, there's still that last rendering effort required (even if it's tiny compared to the effort of generating/syncing the business objects). So yeah, not sure of the options here - what do other people do?
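
    If the site can target ASP.NET 4, one option is a custom OutputCacheProvider, which moves the page-output store itself (not just the data) into a cache shared by the whole farm, so an invalidation on one server is seen by all. A skeleton sketch - the DistributedCache client here is a placeholder for whatever shared store is chosen (Velocity/AppFabric, memcached, etc.):

        using System;
        using System.Web.Caching;

        public class FarmOutputCacheProvider : OutputCacheProvider
        {
            public override object Get(string key)
            {
                return DistributedCache.Get(key);          // placeholder client
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                object existing = DistributedCache.Get(key);
                if (existing != null)
                    return existing;                       // Add must not overwrite
                DistributedCache.Put(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                DistributedCache.Put(key, entry, utcExpiry);
            }

            public override void Remove(string key)
            {
                DistributedCache.Remove(key);
            }
        }

    The provider is then registered under <caching><outputCache> in web.config.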

  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:

    - Yes, the SQL user is set up, using the correct username and password.
    - Yes, the SQL user has execute permissions on the stored procedures in the database.
    - Yes, I have updated the TypeLib references to the new UUID.
    - Yes, I have logged into the database via SSMS with the SQL user id, and it can see and execute the stored procedures just fine in SSMS - but not from the web app.
    - Yes, the SQL user has the database set as its default database.

    The most frustrating thing is that it works fine on the DEV server but not on the production server. I have gone through every IIS setting 5 or 6 times, and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs prod).

    EDIT: I have also tried pointing the prod web box at the dev database server and got the same error, so I'm fairly sure the issue isn't on the database side.
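
    Even though the EDIT points away from the database, one cheap thing to rule out on the SQL Server 2008 side is a default-schema mismatch, where unqualified procedure names resolve against a schema that doesn't contain them. A sketch (the user name is illustrative):

        -- Does the app login's database user have the expected default schema?
        SELECT name, default_schema_name
        FROM sys.database_principals
        WHERE name = 'webapp_user';

        -- Schema-qualifying the call sidesteps name resolution entirely:
        EXEC dbo.usp_MyProcedure;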

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations - specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently I'm using in-memory DataTables and DataRows, which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show the user. I have no idea how much data is passing through the network to the remote DB or how far along it is. Basically, all I know is when Update returns, at which point it is assumed complete (barring any errors or exceptions). So all I can show is 0%, then a pause, then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even estimate the size of each DataRow based on the datatype of each column - using sizeof for value types like int, and checking length for things like strings or byte arrays. With that, I could probably determine an estimated total transfer size before updating, but I'm still stuck without any progress info once Update is called on the TableAdapter.

    Am I stuck using an indeterminate progress bar or a wait cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes, or something? How do you best show this kind of progress info using ADO.NET?
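
    One compromise that keeps the existing data layer, sketched on the assumption that row-at-a-time round trips are acceptable for these batch sizes: feed the changed rows to the underlying adapter one by one and report after each.

        // Per-row progress: update changed rows individually.
        // 'adapter' is the SqlDataAdapter underneath the TableAdapter
        // (exposing it, e.g. via a partial class, is an assumption here).
        DataRow[] changed = table.Select("", "",
            DataViewRowState.Added | DataViewRowState.ModifiedCurrent);

        for (int i = 0; i < changed.Length; i++)
        {
            adapter.Update(new DataRow[] { changed[i] });
            progressBar.Value = (i + 1) * 100 / changed.Length; // WinForms ProgressBar
        }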

  • Map element position in data file to class property

    - by Augusto
    I need to read/write files following a format provided by a third-party specification. The specification itself is pretty simple: it gives the position and the size of each piece of data saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long - about 400 elements - but lots of them can be combined. For example, hour, minute, second, day, month and year can be combined into a single DateTime object. I've split the elements into about 4 categories and created separate classes to hold the data, so instead of one big structure representing the whole record, I have several smaller classes. I've also created different classes for reading and writing the data.

    The problem is: how do I map the positions in the file to the objects' properties, so that I don't need to repeat the values in the reading/writing classes? I could use custom attributes and retrieve them via reflection, but since the code will run on devices with little memory and a slow processor, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();
            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            //...
        }

    Any ideas on this one? Thanks!
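
    One reflection-free sketch: keep the layout in a single static table of (position, size, assignment) entries shared by the reader and the writer, so each offset is written down exactly once. Everything beyond the Read() snippet above - the Timestamp property, ReadString, the date format - is an assumption for illustration:

        using System;
        using System.Globalization;

        class FieldSpec
        {
            public int Position;
            public int Size;
            public Action<DataFile, string> Assign; // raw text -> property
        }

        static readonly FieldSpec[] Layout =
        {
            new FieldSpec { Position = 1, Size = 10,
                Assign = (d, s) => d.SerialNumber = long.Parse(s) },
            // hour..year are contiguous, so read them as one 12-char field
            new FieldSpec { Position = 11, Size = 12,
                Assign = (d, s) => d.Timestamp = DateTime.ParseExact(
                    s, "HHmmssddMMyy", CultureInfo.InvariantCulture) },
            // ...remaining fields...
        };

        public DataFile Read()
        {
            DataFile dataFile = new DataFile();
            foreach (FieldSpec f in Layout)
                f.Assign(dataFile, ReadString(f.Position, f.Size));
            return dataFile;
        }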

  • Multiple user roles in Ruby on Rails

    - by aguynamedloren
    I am building an inventory management application with four different user types: admin, employee, manufacturer, transporter. I haven't started coding yet, but this is what I'm thinking. Manufacturers and transporters are related with a has_many :through many-to-many association via products, as follows:

        class Manufacturer < ActiveRecord::Base
          has_many :products
          has_many :transporters, :through => :products
        end

        class Product < ActiveRecord::Base
          belongs_to :manufacturer
          belongs_to :transporter
        end

        class Transporter < ActiveRecord::Base
          has_many :products
          has_many :manufacturers, :through => :products
        end

    All four user types will be able to log in, but they will have different permissions and views, etc. I don't think I can put them in the same table (users), however, because they have different requirements - e.g. manufacturers and transporters must have a billing address and contact info (through validations), but admins and employees should not have these fields. If possible, I would like to have a single login screen as opposed to 4 different ones. I'm not asking for the exact code to build this, but I'm having trouble determining the best way to make it happen. Any ideas would be greatly appreciated - thanks!
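
    One arrangement that keeps a single users table and a single login screen, sketched with single-table inheritance - the type column and the conditional validations are assumptions about the wiring, not settled design:

        # users table carries a :type column; one table, four classes
        class User < ActiveRecord::Base
          # shared authentication fields and validations live here
        end

        class Admin    < User; end
        class Employee < User; end

        class Manufacturer < User
          has_many :products
          has_many :transporters, :through => :products
          validates_presence_of :billing_address, :contact_info
        end

        class Transporter < User
          has_many :products
          has_many :manufacturers, :through => :products
          validates_presence_of :billing_address, :contact_info
        end

    Since validations are per-class, the billing fields are required only where they make sense, while the login logic stays in one place.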

  • autocommit and @Transactional and Cascading with spring, jpa and hibernate

    - by subes
    Hi, what I would like to accomplish is the following:

    - have autocommit enabled, so by default all queries get committed
    - if there is a @Transactional annotation on a method, it overrides the autocommit and encloses all queries in a single transaction
    - if a @Transactional method calls other @Transactional methods, the outermost annotation should override the inner ones and create a single larger transaction; annotations therefore also override each other

    I am still learning about spring-orm, couldn't find documentation about this, and don't have a test project for it yet. So my questions are: What is the default behaviour of transactions in Spring? If the default differs from my requirement, is there a way to configure my desired behaviour? Or is there a totally different best practice for transactions?

    EDIT: I have the following test setup:

        @javax.persistence.Entity
        public class Entity {
            @Id
            @GeneratedValue
            private Integer id;

            private String name;

            public Integer getId() { return id; }
            public void setId(Integer id) { this.id = id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        @Repository
        public class Dao {
            @PersistenceContext
            private EntityManager em;

            public void insert(Entity ent) {
                em.persist(ent);
            }

            @SuppressWarnings("unchecked")
            public List<Entity> selectAll() {
                List<Entity> ents = em.createQuery(
                    "select e from " + Entity.class.getName() + " e").getResultList();
                return ents;
            }
        }

    With this setup, even with autocommit enabled in Hibernate, the insert method does nothing. I have to add @Transactional to insert, or to the method calling insert, for it to work. Is there a way to make @Transactional completely optional?
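
    For the nesting part of the question: Spring's default propagation is REQUIRED, which already behaves as described - an inner @Transactional method joins the caller's transaction if one is open, otherwise it starts its own, so the outermost annotation effectively wins. A minimal sketch (class and method names are illustrative):

        @Service
        public class EntityService {

            @Autowired
            private Dao dao;

            // Propagation.REQUIRED is the default; it is spelled out here only
            // for clarity. Called on its own, this opens one transaction; called
            // from another @Transactional method, it joins that one instead.
            @Transactional(propagation = Propagation.REQUIRED)
            public void insertBoth(Entity a, Entity b) {
                dao.insert(a);
                dao.insert(b); // both persist or neither does
            }
        }

    The insert-does-nothing behaviour is expected with JPA: a transaction-scoped EntityManager has nothing to flush to outside a transaction, so JDBC-style autocommit doesn't really apply here.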

  • wordpress conditional statements

    - by codedude
    I am using this code in WordPress to display different content when different pages are loaded. I have 5 pages on the site, called Home, Bio, Work, Contact and Notes. The Notes page is being used as a blog. Here is the code I am using:

        <?php if (is_page('contact')) { ?>
            get in touch with me
        <?php } elseif (is_single()) { ?>
            a note from me
        <?php } elseif (is_page('notes')) { ?>
            the notes of me
        <?php } else { ?>
            the <?php the_title(); ?> of me
        <?php } ?>

    So if it is the contact page, it displays "get in touch with me", and if it is a single blog post page, it displays "a note from me". This is where I have a problem: the next branch should display "the notes of me" on the Notes page. However, this does not happen. Instead it shows the default content from the "else" statement. Any idea why this is happening?
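
    A hedged guess: if Notes is assigned as the posts page (Settings > Reading), WordPress treats it as the blog index rather than a page, so is_page('notes') is false there. The conditional that matches the posts page is is_home():

        <?php if (is_page('contact')) { ?>
            get in touch with me
        <?php } elseif (is_single()) { ?>
            a note from me
        <?php } elseif (is_home()) { // the posts page ("Notes") is the blog index ?>
            the notes of me
        <?php } else { ?>
            the <?php the_title(); ?> of me
        <?php } ?>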

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have a browser-based management application with many users, so obviously it has a user management module. I have always thought that having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that anything else is poor design. What he meant was: if a user wants to use the application, then an account should be created in the user table AND as a user account on the database server as well. So if I have 50 users using my application, I should have 50 database server logins.

    I personally think having just one database server account for this database is enough: grant that one account the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed in the user table, as they belong to the application layer. I don't see the need to create a database server account for every application user in the user table; a single database server user should be enough to handle all the queries sent by the application.

    I'd really like to hear some suggestions/opinions on whether I'm missing something - performance or security issues? Thank you very much.

  • Rails Polymorphic Association with multiple associations on the same model

    - by Matt Rogish
    My question is essentially the same as this one: http://stackoverflow.com/questions/1168047/polymorphic-association-with-multiple-associations-on-the-same-model. However, the proposed/accepted solution does not work, as a commenter illustrated later.

    I have a Photo class that is used all over my app. A post can have a single photo, but I want to re-use the polymorphic relationship to add a secondary photo. Before:

        class Photo
          belongs_to :attachable, :polymorphic => true
        end

        class Post
          has_one :photo, :as => :attachable, :dependent => :destroy
        end

    Desired:

        class Photo
          belongs_to :attachable, :polymorphic => true
        end

        class Post
          has_one :photo, :as => :attachable, :dependent => :destroy
          has_one :secondary_photo, :as => :attachable, :dependent => :destroy
        end

    However, this fails because it cannot find the class "SecondaryPhoto". Based on what I could tell from the other thread, I'd want to do:

        has_one :secondary_photo, :as => :attachable, :class_name => "Photo", :dependent => :destroy

    Except calling Post#secondary_photo simply returns the same photo attached via the photo association, i.e. Post#photo === Post#secondary_photo. Looking at the SQL, it does WHERE type = "Photo" instead of, say, "SecondaryPhoto" as I'd like. Thoughts? Thanks!
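
    One workaround, sketched with an extra discriminator column of my own invention: since both associations point at the same attachable columns, give photos a role column and scope each has_one on it (you may need to set the role yourself when building the records):

        class Photo < ActiveRecord::Base
          belongs_to :attachable, :polymorphic => true
        end

        class Post < ActiveRecord::Base
          # assumes the photos table gains a string 'role' column
          has_one :photo, :as => :attachable, :dependent => :destroy,
                  :conditions => { :role => 'primary' }
          has_one :secondary_photo, :as => :attachable, :class_name => "Photo",
                  :dependent => :destroy, :conditions => { :role => 'secondary' }
        end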

  • SSAS Cube reprocessing fails - then succeeds if I try again

    - by EdgarVerona
    So I'm basically brand new to the concept of BI, and I've inherited an existing ETL process that has two steps: 1) load the data into a database that is only used by the cube processing; 2) kick off the SSAS cube processing against said database. It seems pretty well isolated, but occasionally (once a week, sometimes twice) it fails with the following exception: "Errors in the OLAP storage engine: The attribute key cannot be found".

    Now the interesting things are:

    1) The dimension having the issue is not usually the same one (i.e. there's no single dimension that consistently has this failure).
    2) The source table, when I inspect it, does actually contain the attribute key that it says could not be found.
    3) Most interestingly, if I then immediately reprocess the dimensions and cubes manually through SSMS, they reprocess successfully and without incident.

    In both the aforementioned job and when I reprocess through SSMS, I am using ProcessFull, so it should be reprocessing completely. Has anyone run into such an issue? I'm scratching my head, because if it were a genuine data integrity issue, reprocessing the cube wouldn't fix it. What on earth could be happening? I've been tasked with finding out why this happens, but I can neither reproduce it consistently nor point to a data integrity problem as the root cause. Thanks for any input you can provide!

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each of the (say) 50 machines has a service which checks the database and figures out if it's time to start a new automated test on that machine.

    How should I organize my SQL tables to store all this information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and re-create the table with more columns), how should I proceed? Should the new attributes just go in a different table?

    I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify replication? Thanks!
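
    A minimal sketch of one plausible layout (names and types are illustrative): a core run table plus a separate screenshot table, so the frequently rewritten blobs can be excluded from replication, and so future attributes can live in additional one-to-one tables keyed on the same id rather than forcing ALTERs on the core table:

        -- Core test-run table
        CREATE TABLE test_runs (
            test_id     uuid PRIMARY KEY,
            name        text NOT NULL,
            description text,
            status      text NOT NULL
                        CHECK (status IN ('waiting', 'running', 'done')),
            progress    integer NOT NULL DEFAULT 0
                        CHECK (progress BETWEEN 0 AND 100),
            started_at  timestamptz,
            ended_at    timestamptz,
            result      text
        );

        -- Volatile screenshot data, kept apart so a replica can skip it
        CREATE TABLE test_screenshots (
            test_id    uuid PRIMARY KEY REFERENCES test_runs (test_id),
            image      bytea,
            updated_at timestamptz
        );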

  • Problem with JMX query of Coherence node MBeans visible in JConsole

    - by Quinn Taylor
    I'm using JMX to build a custom tool for monitoring remote Coherence clusters at work. I'm able to connect just fine and query MBeans directly, and I've acquired nearly all the information I need. However, I've run into a snag when trying to query MBeans for specific caches within a cluster, which is where I can find stats about the total number of gets/puts, average time for each, etc. The MBeans I'm trying to access programmatically are visible when I connect to the remote process using JConsole, and have names like this:

        Coherence:type=Cache,service=SequenceQueue,name=SEQ%GENERATOR,nodeId=1,tier=back

    It would be more flexible if I could dynamically grab all type=Cache MBeans for a particular node ID without naming all the caches. I'm trying to query them like this:

        QueryExp specifiedNodeId = Query.eq(Query.attr("nodeId"), Query.value(nodeId));
        QueryExp typeIsCache = Query.eq(Query.attr("type"), Query.value("Cache"));
        QueryExp cacheNodes = Query.and(specifiedNodeId, typeIsCache);
        ObjectName coherence = new ObjectName("Coherence:*");
        Set<ObjectName> cacheMBeans = mBeanServer.queryMBeans(coherence, cacheNodes);

    However, regardless of whether I use queryMBeans() or queryNames(), the query returns a Set containing:

    - 0 objects if I pass the arguments shown above
    - 0 objects if I pass null for the first argument
    - all MBeans in the Coherence:* domain (112) if I pass null for the second argument
    - every single MBean (128) if I pass null for both arguments

    The first two results are the unexpected ones, and suggest a problem in the QueryExp I'm passing, but I can't figure out what the problem is. I even tried passing just typeIsCache or specifiedNodeId as the second parameter (with either coherence or null as the first parameter), and I always get 0 results. I'm pretty green with JMX - any insight on what the problem is? (FYI, the monitoring tool will run on Java 5, so things like JMX 2.0 won't help me at this point.)
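
    The likely explanation: Query.attr() matches management *attributes* exposed by the MBean itself, whereas type and nodeId here are key properties of the ObjectName, so the predicate never matches anything. Key properties are filtered in the ObjectName pattern instead - a sketch:

        // Key properties live in the ObjectName, so put the filter there:
        ObjectName pattern =
            new ObjectName("Coherence:type=Cache,nodeId=" + nodeId + ",*");
        Set<ObjectName> cacheMBeans = mBeanServer.queryNames(pattern, null);

    This is plain JMX 1.x pattern matching, so it works on Java 5.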

  • Tips on managing dependencies for a release?

    - by Andrew Murray
    Our system comprises many .NET websites, class libraries, and an MSSQL database. We use SVN for source control and TeamCity to automatically build to a test server. Our team is normally working on 4 or 5 projects at a time. We try to lump many changes into a largish rollout every 2-4 weeks.

    My problem is keeping track of all the dependencies for a rollout. Example: Website A cannot go live until we've rolled out Branch X of Class Library B, built in turn against the trunk of Class Library C, which needs Config Updates Y and Z and Database Update D, which needs Migration Script E... It gets even more complex - like making sure each developer's project is actually compatible with the others and built against the same versions.

    Yes, this is a management issue as much as a technical one. Currently our non-optimal solution is:

    - a whiteboard listing features that haven't gone live yet;
    - relying on our memory and intuition when planning the rollout, until we're pretty sure we've thought of everything;
    - a dry run on our staging environment - a good indication, but we're often not sure Staging is 100% in sync with Live, which is part of the problem I'm hoping to solve;
    - some amount of winging it on rollout day.

    So far so good, minus a few close calls. But as our system grows, I'd like a more scientific release management system allowing for more flexibility - like being able to roll out a single change or bugfix on its own, safe in the knowledge that it won't break anything else. I'm guessing the best solution involves some sort of version numbering system, and perhaps a project management tool. We're a start-up, so we're not too hot on religiously sticking to rigid processes, but we're happy to start, provided it doesn't add more overhead than it's worth. I'd love to hear advice from other teams who have solved this problem.

  • Help with XML-to-XML transformation using XSLT

    - by kaniths
    I'm a rookie trying XSLT and XML transformations for the first time. To start off, I tried a few simple sample programs. I expected the output in tree format (maintaining the hierarchy); instead I just get "KING" on a single line. What could be the problem? PS: I use XMLSpy. Any guidance would be appreciated. Thanks :)

    Input XML:

        <ROWSET>
           <ROW>
              <EMPNO>7839</EMPNO>
              <ENAME>KING</ENAME>
           </ROW>
        </ROWSET>

    XSL used for the transformation:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
           <xsl:output method="xml" encoding="UTF-8" indent="yes" omit-xml-declaration="no"/>
           <xsl:template match="/">
              <Invitation>
                 <To>
                    <xsl:value-of select="ROWSET/ROW/ENAME"/>
                 </To>
              </Invitation>
           </xsl:template>
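
    For reference, once the stylesheet is complete (the excerpt above is missing its closing </xsl:stylesheet> tag, which may just be a paste truncation), the transform should yield the nested structure below. And if the result is being viewed in a browser pane, note that only text nodes are rendered, so "KING" on a single line may simply be the rendered view of a perfectly good tree - check the serialized output instead:

        <?xml version="1.0" encoding="UTF-8"?>
        <Invitation>
           <To>KING</To>
        </Invitation>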

  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here),

    I have a many-to-many relationship schema that I'm having trouble building. The main problems are that I'm working only with primary and foreign keys (no varchars or enums to simplify things), and that the number of many-to-many relationships is not predictable and can grow at any time. I looked around at various questions and couldn't find one that directly addresses this issue. I split the problem in half, so I now have two one-to-many schemas. One is solved, but the other is giving me fits.

    Let's assume table FOO is a standard, boring table with a simple primary key - it's the "one" in the one-to-many relationship. Table BAR can relate to multiple keys of FOO, and the number of related keys is not known beforehand. An example: a query on FOO returns ids 3, 4, 5. BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned).

    The usual join table does not work:

        Table FOO_BAR
        primary_key | foo_id | bar_id

    since FOO returns 3 unique keys and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id:

        Table FOO_TO_BAR
        primary_key | foo_id | bar_to_foo_id

        Table BAR_TO_FOO
        primary_key | foo_to_bar_id | bar_id

    What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
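
    For what it's worth, the plain join table usually does solve this case: nothing forces bar_id to be unique there, so one row per (foo_id, bar_id) pair is exactly how ids 3, 4, 5 all map to a single BAR key. A sketch using the names from the question:

        CREATE TABLE foo_bar (
            foo_id INT NOT NULL REFERENCES foo (id),
            bar_id INT NOT NULL REFERENCES bar (id),
            PRIMARY KEY (foo_id, bar_id)  -- the pair is unique, not each column
        );

        -- BAR row 7 related to FOO ids 3, 4 and 5:
        INSERT INTO foo_bar (foo_id, bar_id) VALUES (3, 7), (4, 7), (5, 7);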

  • Searching a document for multiple terms in VBA?

    - by Tony
    I'm trying to create a macro to be used in Microsoft Word 2007 that will search a document for multiple keywords (string variables) located in an external Excel file (the reason for the external file is that the terms will often be changed and updated). I've figured out how to search a document paragraph by paragraph for a single term and color every instance of that term, and I assume the proper method is to use a dynamic array as the search term variable. The question is: how do I get the macro to build an array containing all the terms from the external file and search each paragraph for each and every term? This is what I have so far:

        Sub SearchForMultipleTerms()

            Dim SearchTerm As String ' declare search term
            SearchTerm = InputBox("What are you looking for?") ' prompt for term; this should be removed, as the terms should come from an external XLS file rather than user input

            Selection.Find.ClearFormatting
            Selection.Find.Replacement.ClearFormatting
            With Selection.Find
                .Text = SearchTerm ' find the term!
                .Forward = True
                .Wrap = wdFindStop
                .Format = False
                .MatchCase = False
                .MatchWholeWord = False
                .MatchWildcards = False
                .MatchSoundsLike = False
                .MatchAllWordForms = False
            End With

            While Selection.Find.Execute
                Selection.GoTo What:=wdGoToBookmark, Name:="\Para" ' select paragraph
                Selection.Font.Color = wdColorGray40 ' color paragraph
                Selection.MoveDown Unit:=wdParagraph, Count:=1 ' move to next paragraph
            Wend

        End Sub

    Thanks for looking!
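
    A hedged sketch of the missing piece: load column A of the first worksheet into an array via late-bound Excel automation, then run the existing Find loop once per term. The file path, sheet and range are illustrative:

        Sub SearchForTermsFromExcel()
            Dim xl As Object, wb As Object
            Dim terms() As String
            Dim i As Long, n As Long

            Set xl = CreateObject("Excel.Application") ' late binding: no reference needed
            Set wb = xl.Workbooks.Open("C:\terms.xlsx") ' illustrative path

            With wb.Worksheets(1)
                n = .Cells(.Rows.Count, 1).End(-4162).Row ' -4162 = xlUp
                ReDim terms(1 To n)
                For i = 1 To n
                    terms(i) = .Cells(i, 1).Value
                Next i
            End With
            wb.Close False
            xl.Quit

            For i = 1 To n
                ' ...run the existing Find/color loop with SearchTerm = terms(i)...
            Next i
        End Sub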

  • Getting jQuery to return an ajax object

    - by japancheese
    Hello, the question title is a bit strange because I'm not exactly sure how to phrase the problem. The issue is that I have many links to which I want to bind a click event that makes an ajax call, and I'm looking to refactor some duplicate code into a single place. The links I'm binding differ in only one thing: an id from a previously declared object. So I have lots of code that looks like this:

        $("a.link").bind('click', function() {
            id = obj.id;
            $.ajax({
                url: "/set/" + id,
                dataType: 'json',
                type: "POST"
            });
        });

    I was trying to refactor it into something like this:

        $("a.link").bind('click', ajax_link(obj.id));

        function ajax_link(id) {
            $.ajax({
                url: "/set/" + id,
                dataType: 'json',
                type: "POST"
            });
        }

    However, as you can imagine, this just makes the ajax call immediately, when the element is bound to the click event. Is there an easy way to refactor this code so I can extract the common ajax code into its own function, and hopefully reduce the number of lines of jQuery in my current script?
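
    A sketch of the standard fix: have the helper return the handler (a closure over the id), so bind() receives a function rather than the result of running the ajax call:

        function ajax_link(id) {
            return function () {        // this closure runs on click, not at bind time
                $.ajax({
                    url: "/set/" + id,
                    dataType: 'json',
                    type: "POST"
                });
            };
        }

        $("a.link").bind('click', ajax_link(obj.id));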

  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux. The socket is non-blocking, and for polling I use epoll. Benchmarks show that the server performs well, and according to Valgrind there are no memory leaks or other problems. The only problem is that when a write() call is interrupted (because the client closed the connection), the server receives a SIGPIPE. I trigger the interruption artificially by running the benchmarking utility "siege" with the parameter -b. It makes lots of requests in a row, which all work perfectly. Then I press CTRL-C and restart siege. Sometimes I get "lucky": the server does not manage to send the full response because the client's fd is invalid. As expected, errno is set to EPIPE. I handle this situation, execute close() on the fd, and then free the memory related to the connection. The problem is that the server then blocks and no longer answers properly. Here is the strace output:

        accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5
        fcntl64(5, F_GETFD) = 0
        fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
        epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0
        epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1
        read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161
        write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106       <<<<<
        write(5, "00001000\r\n", 10) = -1 EPIPE (Broken pipe)               <<<<<
        --- SIGPIPE (Broken pipe) @ 0 (0) ---

    (Why did the previous write() work fine but not this one?)

    As you can see, the client establishes a new connection, which is accepted and added to the epoll set. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in an EPIPE. It is not a bug in siege, because afterwards the server blocks any incoming connection, no matter which client it comes from.
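
    For reference, the default disposition of SIGPIPE terminates the process, and the signal adds nothing once the EPIPE return value is already handled; servers therefore normally opt out of it. A minimal sketch of the two usual ways (fd/buf/len stand in for the real variables):

        #include <signal.h>
        #include <sys/socket.h>

        /* Option 1: ignore SIGPIPE process-wide. write() to a dead peer then
         * simply returns -1 with errno == EPIPE, which the loop already handles. */
        static void install_sigpipe_handling(void)
        {
            signal(SIGPIPE, SIG_IGN);
        }

        /* Option 2 (Linux): suppress the signal per call by replacing write()
         * with send() and the MSG_NOSIGNAL flag:
         *     ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
         */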
