Search Results

Search found 6169 results on 247 pages for 'future proof'.


  • Regressing panel data in SAS.

    - by John
    Hey guys, thanks to your help I successfully managed all my databases! I am now looking at a panel data set on which I have to run a regression. Since I only started my PhD this semester, together with the econometrics courses, I am still new to many statistical applications and regression methods.

    I want to do a simple regression as in Y = x1 + x2 + x3, etc. I have already browsed through some literature and found that for panel data it's common to do a fixed effects regression. Also, my Y variable only takes non-negative values, so I was thinking in the direction of a Tobit model.

    I'm doing some research concerning the coverage of analysts in the financial business. My dependent variable is the coverage of analysts on a certain firm, so per observation I have one analyst and one firm, together with different characteristics of the firm (market cap, betas, etc.). All this data is monthly. As coverage cannot become negative (only 0), a Tobit model again seems natural. Do you guys have any ideas what would be a good regression method? Or some good sources of information (e-books, written books; through university I have access to almost anything concerning my field of work), because I do have to learn these things for future research. Thanks!
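
    For future searchers: the two suggestions above map roughly onto SAS/ETS procedures, PROC PANEL for fixed effects and PROC QLIM for a Tobit with left-censoring at 0. A minimal sketch, assuming SAS/ETS is licensed; the dataset and variable names are placeholders:

        /* One-way fixed effects on the firm cross-section (sketch) */
        proc panel data=mypanel;
            id firm month;
            model y = x1 x2 x3 / fixone;
        run;

        /* Tobit with left-censoring at 0 (sketch) */
        proc qlim data=mypanel;
            model y = x1 x2 x3;
            endogenous y ~ censored(lb=0);
        run;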

    Read the article

  • Convert array to CSV/TSV-formatted string in Python.

    - by dreeves
    Python provides csv.DictWriter for outputting CSV to a file. What is the simplest way to output CSV to a string or to stdout? For example, given a 2D array like this:

        [["a b c", "1,2,3"], ["i \"comma-heart\" you", "i \",heart\" u, too"]]

    return the following string:

        "a b c, \"1,2,3\"\n\"i \"\"comma-heart\"\" you\", \"i \"\",heart\"\" u, too\""

    which when printed would look like this:

        a b c, "1,2,3"
        "i ""comma-heart"" you", "i "",heart"" u, too"

    (I'm taking csv.DictWriter's word for it that that is in fact the canonical way to output that array as CSV. Excel does parse it correctly that way, though Mathematica does not. From a quick look at the Wikipedia page on CSV, it seems Mathematica is wrong.)

    One way would be to write to a temp file with csv.DictWriter and read it back with csv.DictReader. What's a better way?

    TSV instead of CSV: it also occurs to me that I'm not wedded to CSV. TSV would make a lot of the headaches with delimiters and quotes go away: just replace tabs with spaces in the entries of the 2D array, then intersperse tabs and newlines and you're done. Let's include solutions for both TSV and CSV in the answers to make this as useful as possible for future searchers.
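
    A minimal sketch of the string/stdout route using csv.writer with an in-memory buffer (Python 3 shown; on Python 2 the same idea works with StringIO.StringIO):

        import csv
        import io
        import sys

        rows = [["a b c", "1,2,3"],
                ["i \"comma-heart\" you", "i \",heart\" u, too"]]

        # CSV to a string: write into an in-memory text buffer.
        buf = io.StringIO()
        csv.writer(buf).writerows(rows)
        csv_string = buf.getvalue()

        # CSV straight to stdout: any file-like object works.
        csv.writer(sys.stdout).writerows(rows)

        # TSV instead: the bundled excel-tab dialect switches the delimiter.
        csv.writer(sys.stdout, dialect="excel-tab").writerows(rows)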

    Read the article

  • Inheritance of list-style-type property in Firefox (bug in Firebug?)

    - by Marcel Korpel
    Let's have a look at some comments on a page generated by WordPress (it's not a site I maintain; I'm just wondering what's going on here). As these pages might disappear in the near future, I've put some screenshots online. Here's what I saw: obviously, the list-item markers shouldn't be there.

    So I decided to look at the source using Firebug. As you can see, Firebug claims that the list-style property (containing none) is inherited from ol.commentlist. But if that's the case, why are the circle and the square visible? When checking the computed style, Firebug shows the list-style-types correctly. What's the correct behaviour?

    I did a quick check in Chromium, whose Web Inspector gave a better view of reality (the list-item markers were also displayed in this browser): according to WebKit, the list-style of ol.commentlist isn't inherited, only the default value of list-style-type from the rendering engine. So we may conclude that the output of both browsers is correct and that Firefox (Firebug) shows an incorrect representation of inherited styles.

    What does the CSS specification say? "Inheritance will transfer the list-style values from OL and UL elements to LI elements. This is the recommended way to specify list style information." Not much about the inheritance of ol properties to uls. Is Firebug wrong in this respect?

    BTW, I managed to make the markers disappear by just changing line 312 of style.css to

        ol.commentlist, li.commentlist, ul.children {

    When the list-style of ul.children is also explicitly set to none, the markers are not painted. You can have a look at screenshots of Firebug and WebKit's Web Inspector in this case, if you like.
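
    In other words, the working rule from line 312 ends up as something like this (a sketch; none is the value the theme already uses for ol.commentlist):

        ol.commentlist, li.commentlist, ul.children {
            list-style: none;
        }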

    Read the article

  • What headaches should I expect from using Trac?

    - by Dolph Mathews
    No tool is perfect, and I'm about to start several long-term projects using Trac, so I wanted a heads-up on the kinds of problems I may or may not experience with it. In other words, Trac meets my needs in the short term, and I've already made the decision to use it, but I want to know what to expect down the road.

    I am not looking for:

    - "Use product X instead of Trac because..." answers.
    - "Trac is great because..." answers.
    - A comparison to any other specific system.
    - "Trac doesn't support Feature X" answers. I can read the feature list too, thank you very much.

    I am looking for:

    - "Feature X does not behave as expected..."
    - "Trac behaves oddly when..."
    - "Trac doesn't fully support..."
    - "Trac itself has a known bug that will likely never be fixed..."
    - And especially "Trac can't handle..." etc.

    So, what Trac-induced headaches do I have to look forward to? For future reference, this question was asked while Trac v0.11 was the latest stable release.

    Read the article

  • Building a News-feed that comprises posts "created by user's connections" && "on the topics the user is following"

    - by aklin81
    I am working on a Questions & Answers website that allows a user to follow questions on certain topics from his network. A user's news-feed wall comprises only those questions that have been posted by his connections and tagged with the topics that he is following (his expertise topics).

    I am unsure which database data model would fit such an application best. The project needs to consider future provisions for scalability and high-performance issues. I have been looking at Cassandra and MySQL solutions so far. After studying Cassandra, I realized that a simple news-feed design that shows all the posts from one's network would be easy to model: execute fast writes to all followers of a user about that user's post. But for my kind of application, where there is the additional filter of followed topics (i.e., the user receives posts "created by his network" && "on topics the user is following"), I could not convince myself with a good schema design in Cassandra. Perhaps I missed something because of my short understanding of Cassandra; can you please help me out with your suggestions of how this news feed could be implemented in Cassandra? Looking for a great project with Cassandra!

    Edit: There are going to be a maximum of 5 tags allowed for tagging a question (i.e., at most 5 topics can be tagged on a question).

    Read the article

  • Represent multiple Null/Generic objects in an ActiveRecord association?

    - by slothbear
    I have a Casefile model that belongs_to a Doctor. In addition to all the "real" doctors, there are several generic Doctors: "self-treated", "not specified", and "removed" (the casefile used to have a real doctor, but no longer does). I suspect there will be even more generic values in the future.

    I started with special "doctors" in the database, generated from seed data. The generic Doctors only need to respond to the "name" and "real_doctor?" methods. This worked with one, was strained with two, and now feels completely broken. I want to change the behavior and can't figure out how to test it, which is a bad sign. Creating all the generic objects for testing is also trouble, including fake values to pass validation of the required Doctor attributes.

    The Null Object pattern works well for one generic object: the "name" method could check casefile.doctor.nil? and return "self-treated", as demonstrated by Craig Ambrose. What pattern should I use when there are multiple generic objects with very limited state?
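
    A minimal sketch of one way to extend the Null Object pattern to several generics, assuming a hypothetical string column generic_doctor_key on casefiles that records which generic applies:

        # Plain-Ruby null objects; only "name" and "real_doctor?" are needed.
        class GenericDoctor
          attr_reader :name

          def initialize(name)
            @name = name
          end

          def real_doctor?
            false
          end
        end

        class Casefile < ActiveRecord::Base
          belongs_to :doctor

          GENERIC_DOCTORS = {
            "self_treated"  => GenericDoctor.new("self-treated"),
            "not_specified" => GenericDoctor.new("not specified"),
            "removed"       => GenericDoctor.new("removed")
          }

          # Falls back to a generic stand-in when no real doctor is set.
          def doctor_or_generic
            doctor || GENERIC_DOCTORS.fetch(generic_doctor_key, GENERIC_DOCTORS["not_specified"])
          end
        end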

    Read the article

  • How to set up a Zend_Application with an application.ini and a user.ini

    - by Peter Smit
    I am using Zend_Application, and it does not feel right that I am mixing both application and user configuration in my application.ini. What I mean by this is the following: my application needs some library classes in the namespace MyApp_, so in application.ini I put autoloaderNamespaces[] = "MyApp_". This is pure application configuration; no one except a programmer would change it. On the other hand, I also put a database configuration there, something that a sysadmin would change.

    My idea is to split the options between an application.ini and a user.ini, where the options in user.ini take preference (so I can define standard values in application.ini). Is this a good idea? How can I best implement it? The ideas I have are:

    - extending Zend_Application to take multiple config files;
    - making an init function in my Bootstrap that loads the user.ini;
    - parsing the config files in my index.php and passing these to Zend_Application (sounds ugly; a sketch of this option is below).

    What shall I do? I would like the 'cleanest' solution, one which is prepared for the future (newer ZF versions, and other developers working on the same app).
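
    For what it's worth, a minimal sketch of the index.php route using ZF1's Zend_Config_Ini (paths are placeholders; merge() overlays user.ini on top of application.ini):

        <?php
        // Load the base config writable so user.ini can overlay it.
        $config = new Zend_Config_Ini(
            APPLICATION_PATH . '/configs/application.ini',
            APPLICATION_ENV,
            array('allowModifications' => true)
        );
        $config->merge(new Zend_Config_Ini(
            APPLICATION_PATH . '/configs/user.ini',
            APPLICATION_ENV
        ));
        $config->setReadOnly();

        // Zend_Application also accepts a Zend_Config instance.
        $application = new Zend_Application(APPLICATION_ENV, $config);
        $application->bootstrap()->run();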

    Read the article

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a Subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the Subversion repo quite glaring, and that creates problems for me.

    There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately, the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts to the Subversion repository at some point). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in, since I simply don't stage or commit them, but having unstaged local changes means I can't rebase without first cleaning them up.

    What I would like to know is whether there is any way to ignore future changes to a set of tracked files. Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?
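
    One hedged option (with the caveat that git may still rewrite these files on checkouts and merges) is to mark them assume-unchanged so the index stops reporting local modifications:

        # Pretend the tracked file is unchanged (path is a placeholder):
        git update-index --assume-unchanged path/to/build-artifact

        # Revert to normal tracking later:
        git update-index --no-assume-unchanged path/to/build-artifact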

    Read the article

  • REST API error return good practices

    - by Remus Rusanu
    I'm looking for guidance on good practices when it comes to returning errors from a REST API. I'm working on a new API, so I can take it in any direction right now. My content type is XML at the moment, but I plan to support JSON in the future.

    I am now adding some error cases, for instance a client attempting to add a new resource after having exceeded his storage quota. I am already handling certain error cases with HTTP status codes (401 for authentication, 403 for authorization, and 404 for plain bad request URIs). I looked over the blessed HTTP error codes, but none of the 400-417 range seems right for reporting application-specific errors. So at first I was tempted to return my application error with 200 OK and a specific XML payload (i.e. "Pay us more and you'll get the storage you need!"), but I stopped to think about it and it seems too SOAPy (/shrug in horror). Besides, it feels like I'm splitting the error responses into distinct cases, as some are HTTP status code driven and others are content driven.

    So what is the SO crowd's recommendation? Good practices (please explain why!), and also, from a client POV, what kind of error handling in the REST API makes life easier for the client code?
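
    To make the trade-off concrete, here is a hypothetical response for the quota case that keeps a meaningful 4xx status and puts the application-specific detail in the body (the status choice and element names are illustrative assumptions, not taken from the question):

        HTTP/1.1 403 Forbidden
        Content-Type: application/xml

        <error>
          <code>storage-quota-exceeded</code>
          <message>Storage quota exceeded; the resource was not created.</message>
        </error>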

    Read the article

  • MySQL: prevent displaying a row (ONE) which is referenced by another row (TWO), but display a row (THREE) with no reference

    - by Jayapal Chandran
    I have a table like the following:

        id | name     | pid
        ---+----------+-----
        1  | sam      | NULL
        2  | sams ref | 1
        3  | pam      | NULL

    The first row gets inserted with pid as NULL. Then I insert a row which is related to the first row, and then I insert a row which is new and which may be referred to by another row in the future.

    Now I want only the third row to be displayed, and not the first and second rows, because the second row contains a reference to the first row. So if any row has a reference to another row, then both rows should not be displayed; only rows which do not have any reference should be displayed. Besides, is this a good practice? Please advise on this.

    Edit: When I ran it on the server, the query always gave an empty result. To restate: when pid is NULL, that row should appear; but when another entry in the same table has that row's id as its pid, then both rows should not appear. So if any row's id has been referred to, both rows should not appear (here a row will be referred to by at most one other row, not more). On my localhost I have MySQL version 5.0.1 or something like that, but when I installed XAMPP on another system it had 5.5, and the live server had 5.3. So in the versions around 5.0 the query returns rows, but in the higher versions it returns an empty result. So now, in this case, how should I write the query? (One self-join approach is sketched below.)
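
    A sketch of a self-join that keeps only unreferenced, non-referencing rows (the table name items is a placeholder):

        -- Rows that neither reference another row (pid IS NULL)
        -- nor are referenced by any other row.
        SELECT t.*
        FROM items t
        LEFT JOIN items r ON r.pid = t.id
        WHERE t.pid IS NULL
          AND r.id IS NULL;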

    Read the article

  • Parse XML function names and call within whole assembly

    - by Matt Clarkson
    Hello all,

    I have written an application that unit tests our hardware via an internet browser. I have command classes in the assembly that are a wrapper around individual web browser actions such as ticking a checkbox or selecting from a dropdown box:

    - BasicConfigurationCommands
    - EventConfigurationCommands
    - StabilizationCommands

    and a set of test classes that use the command classes to perform scripted tests:

    - ConfigurationTests
    - StabilizationTests

    These are then invoked via the GUI to run prescripted tests by our QA team. However, as the firmware changes quite quickly between releases, it would be great if a developer could write an XML file that could invoke either the tests or the commands:

        <?xml version="1.0" encoding="UTF-8" ?>
        <testsuite>
          <StabilizationTests>
            <StressTest repetition="10" />
          </StabilizationTests>
          <BasicConfigurationCommands>
            <SelectConfig number="2" />
            <ChangeConfigProperties name="Weeeeee" timeOut="15000" delay="1000"/>
            <ApplyConfig />
          </BasicConfigurationCommands>
        </testsuite>

    I have been looking at the System.Reflection class and have seen examples using GetMethod and then Invoke. These require me to create the class object at compile time, and I would like to do all of this at runtime: I would need to scan the whole assembly for the class name and then scan for the method within the class. This seems a large solution, so any information pointing me (and future readers of this post) towards an answer would be great!

    Thanks for reading,
    Matt
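
    A minimal sketch of the runtime lookup (class resolved by simple name within the executing assembly; no error handling, and the XML walking is left out):

        using System;
        using System.Linq;
        using System.Reflection;

        static class RuntimeDispatcher
        {
            // Resolve a type by name anywhere in the current assembly and
            // invoke one of its methods by name, both chosen at runtime.
            public static object InvokeByName(string className,
                                              string methodName,
                                              object[] args)
            {
                Assembly asm = Assembly.GetExecutingAssembly();
                Type type = asm.GetTypes().First(t => t.Name == className);
                object instance = Activator.CreateInstance(type);
                MethodInfo method = type.GetMethod(methodName);
                return method.Invoke(instance, args);
            }
        }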

    Read the article

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, motherboard) can manage. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster.

    My aging Core2Duo gives me 1.5 GB/s if I use one thread to memcpy() (and, understandably, less if I memcpy() with both cores simultaneously). While 1.5 GB/s is a fair amount of data, the real-time application I'm working on will have a budget of something like 1/50th of a second, which at that rate means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days.

    Are there any sites with information, such as actual benchmarks, on raw memory bandwidth of current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
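
    For reference, a rough sketch of a single-threaded memcpy() probe along the lines described above (buffer size and repetition count are arbitrary; pages are touched first so allocation cost is excluded):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        int main(void)
        {
            size_t size = 256u * 1024 * 1024;   /* 256 MB per copy */
            char *src = malloc(size);
            char *dst = malloc(size);
            if (!src || !dst) return 1;

            memset(src, 1, size);               /* fault the pages in */
            memset(dst, 0, size);

            int reps = 10;
            clock_t t0 = clock();
            for (int i = 0; i < reps; i++)
                memcpy(dst, src, size);
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

            printf("%.2f GB/s\n", (double)size * reps / secs / 1e9);
            free(src);
            free(dst);
            return 0;
        }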

    Read the article

  • Multiple Table Inheritance vs. Single Table Inheritance in Ruby on Rails

    - by Tony
    I have been struggling for the past few hours thinking about which route to take. I have a Notification model. Up until now I have used a notification_type column to manage the types, but I think it will be better to create separate classes for the types of notifications, as they behave differently. Right now, there are three ways notifications can get sent out: SMS, Twitter, and Email.

    Each notification would have: id, subject, message, valediction, sent_people_count, deliver_by, geotarget, event_id, list_id, processed_at, deleted_at, created_at, updated_at.

    Seems like STI is a good candidate, right? Of course Twitter/SMS won't have a subject, and Twitter won't have a sent_people_count or valediction. I would say that in this case they share most of their fields. However, what if I add a reply_to field for Twitter and a boolean for DM? My point here is that right now STI makes sense, but is this a case where I may be kicking myself in the future for not just starting with MTI?

    To further complicate things, I want a Newsletter model, which is sort of a notification, but the difference is that it won't use event_id or deliver_by. I could see all subclasses of Notification using about 2/3 of the base class fields. Is STI a no-brainer, or should I use MTI? Thanks!
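
    For concreteness, the STI version is just subclassing over one notifications table with a type column; a sketch, with class names assumed from the question:

        class Notification < ActiveRecord::Base
        end

        class SmsNotification < Notification
        end

        class TwitterNotification < Notification
          # Twitter-only columns (e.g. reply_to) would sit as NULLs on the
          # rows of the other subclasses: the usual STI trade-off.
        end

        class EmailNotification < Notification
        end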

    Read the article

  • How to insert an Array/Object into SQL (best practice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions.

    [--- YOU CAN SKIP THIS PART IF YOU TRUST ME ---] To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes for many weeks into the future. Of course there are many projects, and a project can have many team members; a team member can be involved with many projects at one time. But it's not one-to-many, because a team member can work many weeks on the same project with different hours for different weeks. In other words, each object really is unique. Also (and finally), this data can be changed at any time by any team member, hence it needs to be easy to manipulate. [--- END SKIP, START READING HERE :) ---]

    Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item, a team member assignment, that will actually be part of a larger array of all the team members involved on the project.

    So, assuming that the application's general schema and relation tables aren't total crap and that we are in fact up against a wall in this one case to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML? Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (web service + JS/JSON) or PHP unserialize/serialize (but I am a bit sketched out by the PHP solution because it seems cumbersome when using Ajax). Thoughts, anyone?
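
    If the JSON route wins, a minimal C#-side sketch using System.Web.Script.Serialization's JavaScriptSerializer (class and property names are illustrative assumptions):

        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        class Assignment
        {
            public int MemberId { get; set; }
            public string Week { get; set; }     // e.g. "2010-W23"
            public decimal Hours { get; set; }
        }

        class Example
        {
            static void Main()
            {
                var assignments = new List<Assignment>
                {
                    new Assignment { MemberId = 7, Week = "2010-W23", Hours = 12.5m }
                };

                var serializer = new JavaScriptSerializer();
                string json = serializer.Serialize(assignments);   // store in the column
                var back = serializer.Deserialize<List<Assignment>>(json);
            }
        }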

    Read the article

  • PInvokeStackImbalance -- C# with offreg.dll (Windows DDK 7)

    - by user301185
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find more information on offreg.dll here: MSDN.

    Currently, while attempting to create the hive using ORCreateHive, I receive the following error:

        Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem.
        This is likely because the managed PInvoke signature does not match the
        unmanaged target signature. Check that the calling convention and parameters
        of the PInvoke signature match the target unmanaged signature.

    Here is the offreg.h file containing ORCreateHive:

        typedef PVOID ORHKEY;
        typedef ORHKEY *PORHKEY;

        VOID ORAPI ORGetVersion(
            __out PDWORD pdwMajorVersion,
            __out PDWORD pdwMinorVersion
        );

        DWORD ORAPI OROpenHive (
            __in PCWSTR lpHivePath,
            __out PORHKEY phkResult
        );

        DWORD ORAPI ORCreateHive (
            __out PORHKEY phkResult
        );

        DWORD ORAPI ORCloseHive (
            __in ORHKEY Handle
        );

    The following is my C# code attempting to call the .dll and create the pointer for future use:

        using System.Runtime.InteropServices;

        namespace WindowsFormsApplication6
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                [DllImport("offreg.dll", CharSet = CharSet.Auto,
                           EntryPoint = "ORCreateHive", SetLastError = true,
                           CallingConvention = CallingConvention.StdCall)]
                public static extern IntPtr ORCreateHive2();

                private void button1_Click(object sender, EventArgs e)
                {
                    try
                    {
                        IntPtr myHandle = ORCreateHive2();
                    }
                    catch (Exception r)
                    {
                        MessageBox.Show(r.ToString());
                    }
                }
            }
        }

    I have been able to create pointers in the past with no issue utilizing user32.dll, icmp.dll, etc. However, I am having no such luck with offreg.dll. Thank you.
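
    For future readers: per the header above, ORCreateHive returns a DWORD status and writes the handle through an __out PORHKEY parameter, while the managed declaration takes no parameters and returns IntPtr, which would explain the stack imbalance. A closer-matching declaration would be something like this (untested sketch, assuming ORAPI expands to __stdcall):

        [DllImport("offreg.dll", SetLastError = true,
                   CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORCreateHive(out IntPtr phkResult);

        // Usage sketch: 0 (ERROR_SUCCESS) indicates success.
        IntPtr hive;
        uint result = ORCreateHive(out hive);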

    Read the article

  • Getting an InvalidOperationException when deserialising XML

    - by Paul Johnson
    Hi, I'm writing a simple proof-of-concept application that loads an XML file and, depending on some very simple code, creates a window and puts something into it (it's for a much larger project). Due to limitations in Mono, I'm having to run it this way. The code I currently have looks like this:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Windows.Forms;
        using System.IO;
        using System.Collections;
        using System.Xml;
        using System.Xml.Serialization;

        namespace form_from_xml
        {
            public class xmlhandler : Form
            {
                public void loaddesign()
                {
                    FormData f;
                    f = null;
                    try
                    {
                        string path_env = Path.GetDirectoryName(Application.ExecutablePath)
                                          + Path.DirectorySeparatorChar;
                        // code dies on the line below
                        XmlSerializer s = new XmlSerializer(typeof(FormData));
                        TextReader r = new StreamReader(path_env + "designer-test.xml");
                        f = (FormData)s.Deserialize(r);
                        r.Close();
                    }
                    catch (System.IO.FileNotFoundException)
                    {
                        MessageBox.Show("Unable to find the form file",
                                        "File not found", MessageBoxButtons.OK);
                    }
                }
            }

            [XmlRoot("Forms")]
            public class FormData
            {
                private ArrayList formData;

                public FormData()
                {
                    formData = new ArrayList();
                }

                [XmlElement("Element")]
                public Elements[] elements
                {
                    get
                    {
                        Elements[] elements = new Elements[formData.Count];
                        formData.CopyTo(elements);
                        return elements;
                    }
                    set
                    {
                        if (value == null) return;
                        Elements[] elements = (Elements[])value;
                        formData.Clear();
                        foreach (Elements element in elements)
                            formData.Add(element);
                    }
                }

                public int AddItem(Elements element)
                {
                    return formData.Add(element);
                }
            }

            public class Elements
            {
                [XmlAttribute("formname")] public string name;
                [XmlAttribute("winxsize")] public int winxs;
                [XmlAttribute("winysize")] public int winys;
                [XmlAttribute("type")] public object type;
                [XmlAttribute("xpos")] public int xpos;
                [XmlAttribute("ypos")] public int ypos;
                [XmlAttribute("externaldata")] public bool external;
                [XmlAttribute("externalplace")] public string externalplace;
                [XmlAttribute("text")] public string text;
                [XmlAttribute("questions")] public bool questions;
                [XmlAttribute("questiontype")] public object qtype;
                [XmlAttribute("numberqs")] public int numberqs;
                [XmlAttribute("answerfile")] public string ansfile;
                [XmlAttribute("backlink")] public int backlink;
                [XmlAttribute("forwardlink")] public int forwardlink;

                public Elements() { }

                public Elements(string fn, int wx, int wy, object t, int x, int y,
                                bool ext, string extpl, string te, bool q, object qt,
                                int num, string ans, int back, int end)
                {
                    name = fn; winxs = wx; winys = wy; type = t; xpos = x; ypos = y;
                    external = ext; externalplace = extpl; text = te; questions = q;
                    qtype = qt; numberqs = num; ansfile = ans; backlink = back;
                    forwardlink = end;
                }
            }
        }

    With a very simple

        xmlhandler xml = new xmlhandler();
        xml.loaddesign();

    attached to a WinForms button. Everything is in the same namespace and the XML file actually exists. This is annoying me now - can anyone spot the error of my ways?

    Paul
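
    For future readers: a likely culprit is the two object-typed fields marked [XmlAttribute] (type and qtype). XmlSerializer cannot map System.Object to an XML attribute and throws an InvalidOperationException while constructing the serializer, which matches the line flagged above. A hedged fix is to give them a concrete serializable type, e.g.:

        [XmlAttribute("type")]
        public string type;          // was: public object type;

        [XmlAttribute("questiontype")]
        public string qtype;         // was: public object qtype;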

    Read the article

  • SQL Server 2008: CASE vs IF-ELSE-IF vs GOTO

    - by Saharsh Shah
    I have some rules in my application, and I have written the business logic of those rules in a stored procedure. While creating the procedure I came to know that a CASE statement won't work in my scenario (in T-SQL, CASE is an expression that returns a value, not a control-of-flow construct that can run a block per rule). So I have tried two ways to perform the same operations (using IF-ELSE-IF or GOTO), shown below.

    Method 1, using IF-ELSE-IF conditions:

        DECLARE @V_RuleId SMALLINT;

        IF (@V_RuleId = 1)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 2)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 3)
        BEGIN
            /* My business logic */
        END
        /* ... */
        ELSE IF (@V_RuleId = 19)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 20)
        BEGIN
            /* My business logic */
        END

    Method 2, using a GOTO statement:

        DECLARE @V_RuleId SMALLINT, @V_Temp VARCHAR(100);

        SET @V_Temp = 'GOTO RULE' + CONVERT(VARCHAR, @V_RuleId);
        EXECUTE sp_executesql @V_Temp;

        RULE1:
        BEGIN
            /* My business logic */
        END
        RULE2:
        BEGIN
            /* My business logic */
        END
        RULE3:
        BEGIN
            /* My business logic */
        END
        /* ... */
        RULE19:
        BEGIN
            /* My business logic */
        END
        RULE20:
        BEGIN
            /* My business logic */
        END

    Today I have 20 rules; the number can grow in the future. If I could use a CASE statement I would not have any problem with performance, but I can't, so I am worried about the performance of my procedure. Note also that this procedure will be executed very frequently by the application.

    My questions are: Is there any way to use a CASE statement in my procedure? If not, which method is best to use in my procedure to improve performance? Thanks in advance...
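
    A caution for future readers: Method 2 as written is unlikely to work at all. sp_executesql expects an NVARCHAR batch, and the dynamic string is compiled as a separate batch, so a GOTO inside it cannot jump to labels defined in the procedure body. One hedged alternative that keeps the dispatch dynamic is a procedure per rule (names are illustrative):

        -- One stored procedure per rule: dbo.FooRule1 ... dbo.FooRule20.
        DECLARE @V_ProcName sysname;
        SET @V_ProcName = N'dbo.FooRule' + CONVERT(NVARCHAR(10), @V_RuleId);

        -- EXEC accepts a procedure name held in a variable.
        EXECUTE @V_ProcName;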

    Read the article

  • Efficient way to create a large number of SharePoint folders

    - by BeraCim
    Hi all:

    I'm currently creating a large number of SharePoint folders within a list (e.g. ~800 folders), with each folder containing a different number of items. The way it is currently done is that the code programmatically reads the content types, items, event listeners and the like off the same folder in another web, then creates the same folder in the current web. That ran reasonably fine and fast on a dev environment. However, when it went to an environment with WFEs and farms, it slowed down a lot. I have checked that there are no leaks in the code, and that the code follows SharePoint coding best practices. At the moment I'm looking at it at the code level.

    From your experience, are there any efficient ways of creating a large number of SharePoint folders, lists and items?

    EDIT: I'm currently using the SharePoint API, but will be looking at moving to web services in the future; I'm interested in looking at both options, though. Code-wise, it's just the general reading of a folder and its content types plus items and their details, then creating the same folder in the same list with the same content types, then copying over the items using a batch update. I want to know whether there are more efficient ways of doing the above. Thanks.

    Read the article

  • Finding the last focused element

    - by Joshua Cody
    I'm looking to determine which element had the last focus in a series of inputs that are added dynamically by the user.

    This code can only get the inputs that are available on page load:

        $('input.item').focus(function(){
            $(this).siblings('ul').slideDown();
        });

    And this code sees all elements that have ever had focus:

        $('input.item').live('focus', function(){
            $(this).siblings('ul').slideDown();
        });

    The HTML structure is this:

        <ul>
            <li><input class="item" name="goals[]">
                <ul>
                    <li>long list here</li>
                    <li>long list here</li>
                    <li>long list here</li>
                </ul>
            </li>
        </ul>
        <a href="#" id="add">Add another</a>

    On page load, a single input loads. Then with each "Add another", a new copy of the top unordered list's contents is made and appended, and the new input gets focus. When each input gets focus, I'd like to show the list beneath it. But I don't seem to be able to "watch for the most recently focused element, which exists now or in the future."

    To clarify: I'm not looking for the last occurrence of an element in the DOM tree. I'm looking to find the element that currently has focus, even if said element was not present on the original page load. So in the above image, if I were to focus on the second element, the list of words should appear under the second element; my focus is currently on the last element, so the words are displayed there. Do I have some sort of fundamental assumption wrong?
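
    A hedged sketch building on the live() approach above: remember the last focused input in a variable so dynamically added inputs are covered too:

        var lastFocused = null;

        $('input.item').live('focus', function () {
            // Collapse the list under the previously focused input, if any.
            if (lastFocused && lastFocused !== this) {
                $(lastFocused).siblings('ul').slideUp();
            }
            lastFocused = this;
            $(this).siblings('ul').slideDown();
        });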

    Read the article

  • Load Balancing of PHP/MYSQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am making a script in PHP/MySQL (CodeIgniter) and I am extremely interested in knowing whether there is a way, without big architectural changes to the script, to do load balancing.

    What I mean is this: for now I will rent a medium dedicated server with 2 GB RAM, 200 GB storage and a good processor, and this will be enough for, let's say, half a year for the users who will come. But as they become more and more numerous (it's a social network, and at night the server may be serving 500-1500 or 5000-8000 users online), I wonder whether there is a way to simply add a second server with some configuration that will bear the next wave of load. And after that again one more, and so on...

        <?
        if ($answer == YES) {
            how(??);
        } else {
            whatToDo(??);
        }
        ?>

    If there is no way, then maybe you could point me to the easiest load balancing solution. I would also be extremely thankful if you could tell me, for such purposes, whether I should move to, let's say, PostgreSQL or Firebird. Which of them will be easier to handle in the future? I am currently running something like 60 queries for all the data on the mysite.com/users/show/$userId page... maybe too much, but anyway... after some optimization it can be 20-30...

    Read the article

  • Java: how to do fast copy of a BufferedImage's pixels? (unit test included)

    - by WizardOfOdds
    I want to do a copy (of a rectangle area) of the ARGB values from a source BufferedImage into a destination BufferedImage. No compositing should be done: if I copy a pixel with an ARGB value of 0x8000BE50 (alpha value at 128), then the destination pixel must be exactly 0x8000BE50, totally overriding the destination pixel.

    I've got a very precise question and I made a unit test to show what I need. The unit test is fully functional and self-contained and is passing fine and is doing precisely what I want. However, I want a faster and more memory efficient method to replace copySrcIntoDstAt(...). That's the whole point of my question: I'm not after how to "fill" the image in a faster way (what I did is just an example to have a unit test). All I want is to know what would be a fast and memory efficient way to do it (ie fast and not creating needless objects). The proof-of-concept implementation I've made is obviously very memory efficient, but it is slow (doing one getRGB and one setRGB for every pixel).

    Schematically, I've got this (where A indicates corresponding pixels from the destination image before the copy):

        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA

    And I want to have this:

        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAABBBBAAA
        AAAAAAAAAAAAABBBBAAA
        AAAAAAAAAAAAAAAAAAAA

    where 'B' represents the pixels from the src image. I'm looking for an exact replacement of the method, not for an API link/quote.

        import org.junit.Test;
        import java.awt.image.BufferedImage;
        import static org.junit.Assert.*;

        public class TestCopy {
            private static final int COL1 = 0x8000BE50; // alpha at 128
            private static final int COL2 = 0x1732FE87; // alpha at 23

            @Test
            public void testPixelsCopy() {
                final BufferedImage src = new BufferedImage(5, 5, BufferedImage.TYPE_INT_ARGB);
                final BufferedImage dst = new BufferedImage(20, 20, BufferedImage.TYPE_INT_ARGB);
                convenienceFill(src, COL1);
                convenienceFill(dst, COL2);
                copySrcIntoDstAt(src, dst, 3, 4);
                for (int x = 0; x < dst.getWidth(); x++) {
                    for (int y = 0; y < dst.getHeight(); y++) {
                        if (x >= 3 && x <= 7 && y >= 4 && y <= 8) {
                            assertEquals(COL1, dst.getRGB(x, y));
                        } else {
                            assertEquals(COL2, dst.getRGB(x, y));
                        }
                    }
                }
            }

            // clipping is unnecessary
            private static void copySrcIntoDstAt(final BufferedImage src,
                                                 final BufferedImage dst,
                                                 final int dx, final int dy) {
                // TODO: replace this by a much more efficient method
                for (int x = 0; x < src.getWidth(); x++) {
                    for (int y = 0; y < src.getHeight(); y++) {
                        dst.setRGB(dx + x, dy + y, src.getRGB(x, y));
                    }
                }
            }

            // This method is just a convenience method, there's no point
            // in optimizing this method, this is not what this question is about
            private static void convenienceFill(final BufferedImage bi,
                                                final int color) {
                for (int x = 0; x < bi.getWidth(); x++) {
                    for (int y = 0; y < bi.getHeight(); y++) {
                        bi.setRGB(x, y, color);
                    }
                }
            }
        }
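
    For future readers, a hedged drop-in replacement using the bulk int[] variants of getRGB/setRGB (one array allocation per call instead of per-pixel accessor calls; setRGB sets values directly, with no compositing):

        private static void copySrcIntoDstAt(final BufferedImage src,
                                             final BufferedImage dst,
                                             final int dx, final int dy) {
            final int w = src.getWidth();
            final int h = src.getHeight();
            // Read the whole source rectangle into one array, then write it out.
            final int[] pixels = src.getRGB(0, 0, w, h, null, 0, w);
            dst.setRGB(dx, dy, w, h, pixels, 0, w);
        }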

    Read the article

  • ADO.NET zombie transaction bug? How to ensure that commands will not be executed on implicit transactions?

    - by TN
    E.g. when a deadlock occurs, subsequent SQL commands are successfully executed, even if their assigned SQL transaction has already been rolled back. It seems this is caused by a new implicit transaction that is created on SQL Server. One could expect ADO.NET to throw an exception saying that the commands are being executed on a zombie transaction; however, no such exception is thrown. (I think this is a bug in ADO.NET.) Moreover, because of the zombie transaction, the final Dispose() silently ignores the rollback.

    Any ideas how I can ensure that nobody can execute commands on an implicit transaction? Or how to check whether a transaction is a zombie? I found that Commit() and Rollback() check for zombie transactions, but I can hardly call them just as a test :) I also found that reading IsolationLevel will do the check too, but I am not sure whether simply calling transaction.IsolationLevel.ToString(); might be removed by a future optimizer. Or do you know any other safe way to invoke a getter (without using reflection or IL emitting)?

    Read the article

  • Javascript: Multiple mouseout events triggered

    - by Channel72
    I'm aware of the different event models in JavaScript (the W3C model versus the Microsoft model), as well as the difference between bubbling and capturing. However, after a few hours reading various articles about this issue, I'm still unsure how to properly code the following, seemingly simple, behavior:

    If I have an outer div and an inner div element, I want a single mouseout event to be triggered when the mouse leaves the outer div. When the mouse crosses from the inner div to the outer div, nothing should happen, and when the mouse crosses from the outer div to the inner div, nothing should happen. The event should fire only if the mouse moves from the outer div to the surrounding page.

        <div id="outer" style="width:20em; height:20em; border:1px solid #F00"
             align="center"
             onmouseout="alert('mouseout event!')">
            <div id="inner" style="width:18em; height:18em; border:1px solid #000"></div>
        </div>

    Now, if I place the mouseout event on the outer div, two mouseout events are fired when the mouse moves from the inner div to the surrounding page: the event fires once when the mouse moves from inner to outer, and then again when it moves from outer to the surrounding page. I know I can cancel the event using ev.stopPropagation(), so I tried registering an event handler with the inner div to cancel the event propagation. However, this won't prevent the event from firing when the mouse moves from the outer div to the inner div.

    So, unless I'm overlooking something, it seems to me this behavior can't be accomplished without complex mouse-tracking functions. In the future, I plan to reimplement a lot of this code using a more advanced framework like jQuery, but for now, I'm wondering if there is a simple way to implement the above behavior in regular JavaScript.
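
    One common plain-JavaScript approach (a sketch, not from the question): inspect the event's relatedTarget/toElement and ignore the mouseout when the mouse is still inside the outer div:

        var outer = document.getElementById('outer');

        function isInside(node, container) {
            // Walk up the tree; true if node is container or a descendant of it.
            while (node) {
                if (node === container) return true;
                node = node.parentNode;
            }
            return false;
        }

        outer.onmouseout = function (e) {
            e = e || window.event;                      // W3C vs Microsoft models
            var to = e.relatedTarget || e.toElement;    // where the mouse went
            if (isInside(to, outer)) return;            // still inside: ignore
            alert('mouseout event!');                   // truly left the outer div
        };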

    Read the article

  • export and import utf8 data in mysql: best practices

    - by ChrisRamakers
    We're often faced with the need to send one of our clients a data file, with content from the database, that he or she needs to translate. Most of the time this export is CSV or XLS: we create a CSV dump with phpMyAdmin and get an XLS file in return with the translated data. The problem is that the data is usually UTF-8, and when the file comes back as XLS and we load the data into MySQL again, we end up every time with UTF-8 problems: characters not being displayed properly, and so on. We've already double-checked everything in MySQL, from my.cnf to column character sets, and everything is correctly set to UTF-8.

    My question is not how to fix the encoding issue, since that's been solved, but how we should best proceed in the future when handling this situation. What export format should we hand over? How should we import (just MySQL's LOAD DATA INFILE, or our own processing scripts)? What is the general consensus on how to handle this situation?

    We would like to continue using Excel if possible, since that's the format almost everybody expects, including our clients' translation agencies. Our clients' ease of use is the most important factor here, without overloading us with major issues each time. The best of both worlds :)
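
    If LOAD DATA INFILE stays in the workflow, one small safeguard is to make the encoding explicit at import time. A sketch; the file name, table name and delimiters are placeholders:

        LOAD DATA INFILE '/tmp/translations.csv'
        INTO TABLE translations
        CHARACTER SET utf8
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n';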

    Read the article

  • deadlock because of foreign key?

    - by George2
    Hello everyone,

    I am using SQL Server 2008 Enterprise. I hit a deadlock in the following stored procedure, but through my own fault I did not record the deadlock graph, and now I cannot reproduce the issue. I want to do a postmortem to find the root cause of the deadlock so as to avoid deadlocks in the future. The deadlock happens on the DELETE statement.

    In the DELETE statement, Param1 is a column of table FooTable, and Param1 is a foreign key of another table (it refers to a primary key clustered index column of that other table). There is no index on Param1 itself in FooTable. FooTable has another column which is used as the clustered primary key, but that is not the Param1 column.

    Here is my guess at why there is a deadlock, and I want people to review whether my analysis is correct:

    1. Since the Param1 column has no index, there will be a table scan, which acquires a table-level lock; because of the foreign key, the delete operation will also need to check the master table (i.e. acquire a lock on the master table).
    2. Some operation on the master table acquires a lock on the master table, but wants to acquire a lock on FooTable.
    3. (1) and (2) form a lock cycle, which makes the deadlock happen.

    Is my analysis correct? Any scenario to reproduce it?

        create PROCEDURE [dbo].[FooProc]
        (
            @Param1 int
           ,@Param2 int
           ,@Param3 int
        )
        AS

        DELETE FooTable WHERE Param1 = @Param1

        INSERT INTO FooTable
        (
            Param1
           ,Param2
           ,Param3
        )
        VALUES
        (
            @Param1
           ,@Param2
           ,@Param3
        )

        DECLARE @ID bigint
        SET @ID = ISNULL(@@Identity,-1)
        IF @ID > 0
        BEGIN
            SELECT IdentityStr FROM FooTable WHERE ID = @ID
        END

    thanks in advance,
    George
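
    If the table-scan theory above holds, a hedged first step is to index the foreign key column so the DELETE can seek instead of scanning (and taking broader locks):

        CREATE NONCLUSTERED INDEX IX_FooTable_Param1
            ON dbo.FooTable (Param1);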

    Read the article
