Search Results

Search found 7086 results on 284 pages for 'explain'.

Page 205 of 284

  • People not respecting good practices at workplace

    - by VexXtreme
    Hi. There are some major issues in my company regarding practices, procedures and methodologies. First of all, we're a small firm with only 3-4 developers, one of whom is our boss; he isn't really a programmer, he just chimes in now and then and tries to code some simple things. The biggest problems are:

    Major cowboy coding and a lack of methodologies. I've tried explaining the benefits of TDD and unit testing to everyone, but I only got weird looks, as if I were talking nonsense. Even the boss reacted along the lines of "why do we need that? It's just unnecessary overhead and a waste of time." Nobody uses design patterns. I have to tell people not to write business logic in the code-behind, I have to remind them not to hardcode concrete implementations and dependencies into classes, et cetera. I often feel like a nazi because of this, and people think I'm enforcing unnecessary policies and use of design patterns.

    The biggest problem of all is that people don't even respect common-sense security policies. I've noticed that the college students who work on tech support use our continuous integration and source control server as a dump to store the music, videos and TV series they download from torrents. You can imagine the horror when I realized that most of the partition reserved for source control backups was taken up by entire seasons of TV series and movies. Our development server isn't even connected to a UPS or surge protection; it's just plugged straight into the wall outlet. I asked the boss to buy surge protection, but he said it's unnecessary.

    All in all, I like working here because the atmosphere is very relaxed, the money is good and we're all like a family (so don't advise me to quit), but I simply don't know how to explain to people that they need to stick to the standards and good practices of the IT industry and that they can't behave so irresponsibly. Thanks for the advice.

    Read the article

  • Powershell: Select-Object with additional calculated difference value

    - by David.Chu.ca
    Let me explain my question. I have a daily report about one PC's free space in a text file (sp.txt) like:

        DateTime     FreeSpace(MB)
        --------     -------------
        03/01/2010   100.43
        DateTime     FreeSpace(MB)
        --------     -------------
        03/02/2010   98.31
        ....

    I need to use this file as input to generate a report with the difference in free space between dates:

        DateTime     FreeSpace(MB)   Change
        --------     -------------   ------
        03/01/2010   100.43
        03/02/2010   98.31           2.12
        ....

    Here is the code I have to get the above result, without the additional "Change" calculation:

        Get-Content "C:\report\sp.txt" |
            where {$_.trim().length -gt 0 -and -not ($_ -like "Date*") -and -not ($_ -like "---*")} |
            Select-Object @{Name='DateTime'; expression={[DateTime]::Parse($_.SubString(0, 10).Trim())}},
                          @{Name='FreeSpace(MB)'; expression={$_.SubString(12, 12).Trim()}} |
            Sort-Object DateTime |
            Select-Object DateTime, 'FreeSpace(MB)' | ft -AutoSize
        # how to add the additional column Change to this Select-Object?

    My understanding is that Select-Object returns a collection of objects. Here I create this collection from an input text file (parsing each line into two parts: DateTime and FreeSpace(MB)). In order to calculate the difference between dates, I guess I need to add an additional calculated value to the result of the last Select-Object, or pipe the result into another Select-Object with the calculation as an additional property or column. To do that, I need to get the previous row's FreeSpace value. I'm not sure if that is possible, and if so, how?
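
    For reference, a sketch of one way to add the Change column: materialize the parsed rows first, then walk them in date order while carrying the previous row's value along. The variable names and rounding are my own choices, not from the original script, and [pscustomobject] assumes PowerShell 3.0 or later:

        $rows = Get-Content "C:\report\sp.txt" |
            Where-Object { $_.Trim().Length -gt 0 -and $_ -notlike "Date*" -and $_ -notlike "---*" } |
            ForEach-Object {
                [pscustomobject]@{
                    DateTime        = [DateTime]::Parse($_.Substring(0, 10).Trim())
                    'FreeSpace(MB)' = [double]$_.Substring(12, 12).Trim()
                }
            } |
            Sort-Object DateTime

        # Carry the previous row's free space forward and emit Change = previous - current
        $prev = $null
        $report = foreach ($row in $rows) {
            [pscustomobject]@{
                DateTime        = $row.DateTime
                'FreeSpace(MB)' = $row.'FreeSpace(MB)'
                Change          = $(if ($prev -ne $null) { [math]::Round($prev - $row.'FreeSpace(MB)', 2) })
            }
            $prev = $row.'FreeSpace(MB)'
        }
        $report | Format-Table -AutoSize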

    Read the article

  • -[CFString length]: message sent to deallocated instance 0x3881940 when I scroll near the bottom

    - by James
    Hey guys, I've got a problem debugging an iPhone app that I'm attempting to write, and it's got me stumped. Bear with me, I'm a n00b to programming and might get some of the terminology wrong, but I'll try to explain it as best as I can. The app gets an XML doc from a web site, parses it into an array, and then displays it in a table view; the parser is in a separate file. viewDidLoad in RootViewController sends it a URL, the parser goes to work and then returns an NSMutableArray. When I run the app it works fine with small XML files (5 entries or so, and 1-3 sections), but when I use a larger one (20+ rows, over 12 sections) I get the error "-[CFString length]: message sent to deallocated instance 0x3881940" when I scroll near the bottom of the table view, just as the last section title is about to come into the viewable area of the screen, to be precise. If I return a static string instead of the object in my array in this method it doesn't crash, but I can use NSLog to query the array and it returns the title with no problems.

        - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)indexPath {
            return [[returnedEvents objectAtIndex:indexPath] objectAtIndex:0];
        }

    The returnedEvents array isn't released until -(void)dealloc {}. I have read a few other posts on here and a few guides on debugging, but so far I have been unable to find anything that was able to help me. I'd be more than happy to post some code and any more information; I'm just not sure where to start... Thanks in advance to anyone willing to have a go at helping me out.
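
    For what it's worth, this crash pattern (a string deallocated by the time a late section header is drawn) is commonly an over-release rather than a table-view problem: the parser hands back autoreleased objects and nothing retains them. A minimal sketch of the kind of change that usually addresses it, with placeholder names for the property and parser call (not the original code):

        // RootViewController.h -- a retaining property for the parsed data (hypothetical)
        @property (nonatomic, retain) NSMutableArray *returnedEvents;

        // RootViewController.m, viewDidLoad -- keep a retained reference instead of the raw return value
        self.returnedEvents = [parser eventsFromURL:feedURL];   // 'parser' and 'eventsFromURL:' are placeholders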

    Read the article

  • Extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I'm trying msidb.exe and orca.exe. The documentation for Orca states:

        Many merge module options can be specified from the command line...

        Extracting Files from a Merge Module

        Orca supports three different methods for extracting files contained in a merge module. Orca can extract the individual CAB file, extract the files into a module tree and extract the files into a source image once it has been merged into a target database...

        Extracting Files

        To extract the individual files from a merge module, use the -x <path> option on the command line, where <path> is the desired path to the new directory tree. The specified path is used as the root path for the extracted files. All files are extracted from the CAB file embedded in the module and placed in the specified path. The directory layout for the extracted files is based on the directory tree of the merge module.

    It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the .msm I specified) and then does nothing. I've tried a variety of command lines, usually starting with this:

        orca -x theDirectory theModule.msm

    I use "theDirectory" as whatever empty folder I want. Like I said, it didn't do anything. Then I tried msidb, where a couple of my attempts looked like this:

        msidb -d theModule.msm -w {storage}
        msidb -d theModule.msm -x {stream}

    In both cases, I don't know what to insert for {storage} or {stream} to make it happy; I don't know what those represent. Can someone explain what I'm doing wrong with the command line options? Is there any other tool that can do this?
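
    A sketch of the msidb route, assuming the embedded cabinet stream uses the conventional name MergeModule.CABinet (that name is an assumption worth verifying against the module's streams): -x pulls the cabinet out of the module, and expand.exe then unpacks the individual files from the cabinet:

        msidb.exe -d theModule.msm -x MergeModule.CABinet
        expand.exe MergeModule.CABinet -F:* C:\extracted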

    Read the article

  • Tricky SQL query involving consecutive values

    - by Gabriel
    I need to perform a relatively easy to explain but (given my somewhat limited skills) hard to write SQL query. Assume we have a table similar to this one:

        exam_no | name | surname | result | date
        --------+------+---------+--------+------------
        1       | John | Doe     | PASS   | 2012-01-01
        1       | Ryan | Smith   | FAIL   | 2012-01-02  <--
        1       | Ann  | Evans   | PASS   | 2012-01-03
        1       | Mary | Lee     | FAIL   | 2012-01-04
        ...     | ...  | ...     | ...    | ...
        2       | John | Doe     | FAIL   | 2012-02-01  <--
        2       | Ryan | Smith   | FAIL   | 2012-02-02
        2       | Ann  | Evans   | FAIL   | 2012-02-03
        2       | Mary | Lee     | PASS   | 2012-02-04
        ...     | ...  | ...     | ...    | ...
        3       | John | Doe     | FAIL   | 2012-03-01
        3       | Ryan | Smith   | FAIL   | 2012-03-02
        3       | Ann  | Evans   | PASS   | 2012-03-03
        3       | Mary | Lee     | FAIL   | 2012-03-04  <--

    Note that exam_no and date aren't necessarily related as one might expect from the kind of example I chose. Now, the query that I need to do is as follows: from the latest exam (exam_no = 3) find all the students that have failed (John Doe, Ryan Smith and Mary Lee). For each of these students find the date of the first of the batch of consecutively failing exams. Another way to put it would be: for each of these students find the date of the first failing exam that comes after their last passing exam. (Look at the arrows in the table.) The resulting table should be something like this:

        name  | surname | date_since_failing
        ------+---------+--------------------
        John  | Doe     | 2012-02-01
        Ryan  | Smith   | 2012-01-02
        Mary  | Lee     | 2012-01-04
        Ann   | Evans   | 2012-02-03

    How can I perform such a query? Thank you for your time.
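
    A sketch of one way to express the verbal description (students who fail the latest exam; for each, the earliest FAIL dated after their last PASS). The table name exam is assumed, and the COALESCE fallback covers students who have never passed:

        SELECT e.name,
               e.surname,
               MIN(f.date) AS date_since_failing
        FROM exam e
        JOIN exam f
          ON  f.name = e.name
          AND f.surname = e.surname
          AND f.result = 'FAIL'
          AND f.date > COALESCE((SELECT MAX(p.date)
                                 FROM exam p
                                 WHERE p.name = e.name
                                   AND p.surname = e.surname
                                   AND p.result = 'PASS'), DATE '1900-01-01')
        WHERE e.exam_no = 3
          AND e.result = 'FAIL'
        GROUP BY e.name, e.surname;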

    Read the article

  • How to write an asmx web service without using app_code directory?

    - by JL
    Excuse the title, but it's best I just explain the problem. I have 2 projects in my solution:

    - A class library
    - A web application, which contains a web service (asmx)

    The web service has its code sitting in the App_Code folder, in a file [webservicename].cs. Inside the web service code-behind class, I have a web method; here is a simplified example:

        [WebMethod]
        public EnumTaskExportState ProcessTask()
        {
            var tm = new UploadTaskManager();
            return tm.ProcessTask();
        }

    Now at design time in Visual Studio (2010 or 2008), when I right-click on UploadTaskManager and select "Go to definition", I get taken to AppData\Temp[some folder structure]...etc.... and it displays the public class definition. Instead I would like complete integration, so that I get taken directly to the actual class in the class library project. My guess is that this happens because I am using the App_Code route rather than a compiled file for the web service class, but I don't know any other way to do this. How can I fix this? Possibly do away with the need for the App_Code directory?
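
    One way to drop App_Code entirely (sketched here with placeholder namespace and assembly names): move the [WebMethod] class into the class library, reference that library from the web application, and leave nothing in the .asmx file except a directive pointing at the compiled type:

        <%@ WebService Language="C#" Class="MyCompany.Services.UploadTaskService, MyCompany.Services" %>

    With the code living in a referenced project instead of App_Code, "Go to definition" should resolve to the real source file rather than the shadow-copied temp folder.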

    Read the article

  • C# IsolatedStorage Not Working

    - by Don
    I am building a C# .NET (VS2010) IE8 add-on application, but am having some trouble saving data using IsolatedStorage. No exceptions occur, but after writing the data, when I try to read the contents back it is blank, and I can find no evidence that it actually saved. Could anyone point out any problems with the following code that would explain why it doesn't work?

        //Write
        IsolatedStorageFile app_isoStore = IsolatedStorageFile.GetStore(
            IsolatedStorageScope.User | IsolatedStorageScope.Assembly, null, null);
        IsolatedStorageFileStream isoStream = new IsolatedStorageFileStream(
            "app_started.txt", FileMode.OpenOrCreate, FileAccess.Write, app_isoStore);
        StreamWriter iswriter = new StreamWriter(isoStream);
        iswriter.WriteLine("Run");
        iswriter.Close();
        //app_isoStore.Dispose();
        app_isoStore.Close();

        //Read
        IsolatedStorageFile app_isoStoreCheck = IsolatedStorageFile.GetStore(
            IsolatedStorageScope.User | IsolatedStorageScope.Assembly, null, null);
        IsolatedStorageFileStream isoReadStream = new IsolatedStorageFileStream(
            "app_started.txt", FileMode.Open, FileAccess.Read, app_isoStoreCheck);
        StreamReader isreader = new StreamReader(isoReadStream);
        string rdata = isreader.ReadToEnd();
        isreader.Close();
        //app_isoStoreCheck.Dispose();
        app_isoStoreCheck.Close();

    Should I be using:

        IsolatedStorageFile.GetStore(IsolatedStorageScope.User |
            IsolatedStorageScope.Domain | IsolatedStorageScope.Assembly, null, null)

    instead of:

        IsolatedStorageFile.GetStore(IsolatedStorageScope.User |
            IsolatedStorageScope.Assembly, null, null)

    What's the difference between the two GetStore examples above, please?
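
    A minimal sketch of the same write-then-read flow wrapped in using blocks, which guarantees the writer is flushed and closed before the read happens (same file name and scope as above; namespaces System.IO and System.IO.IsolatedStorage assumed):

        using (var store = IsolatedStorageFile.GetStore(
                   IsolatedStorageScope.User | IsolatedStorageScope.Assembly, null, null))
        {
            using (var stream = new IsolatedStorageFileStream(
                       "app_started.txt", FileMode.Create, FileAccess.Write, store))
            using (var writer = new StreamWriter(stream))
            {
                writer.WriteLine("Run");
            }   // writer and stream are disposed here, so the data is flushed before the read below

            using (var stream = new IsolatedStorageFileStream(
                       "app_started.txt", FileMode.Open, FileAccess.Read, store))
            using (var reader = new StreamReader(stream))
            {
                string rdata = reader.ReadToEnd();   // expected: "Run"
            }
        }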

    Read the article

  • Multiple asynchronous method calls to method while in a loop

    - by ranabra
    I have spent a whole day trying various ways using AddOnPreRenderCompleteAsync and RegisterAsyncTask, but no success so far. I succeeded in making the call to the DB asynchronous using BeginExecuteReader and EndExecuteReader, but that is missing the point. The async handling should not be the call to the DB, which in my case is fast; it should be afterwards, during the while loop, while calling an external web service. I think the simplified pseudo code will explain best (note: the connection string is using MultipleActiveResultSets):

        "Select ID, UserName from MyTable"
        'Open connection to DB
        ExecuteReader();
        if (DR.HasRows)
        {
            while (DR.Read())
            {
                'Call external web-service
                'and get current Temperature of each UserName - DR["UserName"].ToString()
                'Update my local DB
                Update MyTable set Temperature = ValueFromWebService where UserName = DR["UserName"]
                CmdUpdate.ExecuteNonQuery();
            }
            'Close connection etc
        }

    Accessing the DB is fast. Getting the returned result from the external web service is slow, and that at least should be handled asynchronously. If each call to the web service takes just 1 second, then assuming I have only 100 users it will take a minimum of 100 seconds for the DB update to complete, which obviously is not an option. There should eventually be thousands of users (currently only 2). Currently everything works, just very synchronously :) Thoughts to myself: maybe my way of approaching this is wrong? Maybe the entire process should be called asynchronously? Many thanx
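
    A sketch of one alternative shape for this, assuming .NET 4's Parallel.ForEach is available: read the rows synchronously first, fan the slow web-service calls out with a bounded degree of parallelism, then run the quick UPDATEs afterwards. GetTemperatureFromService and UpdateTemperature are placeholders for the real web-service proxy and update command:

        var userNames = new List<string>();
        using (var cmd = new SqlCommand("SELECT ID, UserName FROM MyTable", connection))
        using (var dr = cmd.ExecuteReader())
        {
            while (dr.Read())
                userNames.Add(dr["UserName"].ToString());
        }

        // The slow part: one external call per user, run concurrently instead of one-by-one.
        var temperatures = new ConcurrentDictionary<string, double>();
        Parallel.ForEach(userNames,
                         new ParallelOptions { MaxDegreeOfParallelism = 8 },
                         name => temperatures[name] = GetTemperatureFromService(name));

        // The fast part: plain synchronous updates once all results are in.
        foreach (var pair in temperatures)
            UpdateTemperature(pair.Key, pair.Value);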

    Read the article

  • WCF Service Layer in n-layered application: performance considerations

    - by Marconline
    Hi all. When I went to university, teachers used to say that in a well structured application you have a presentation layer, a business layer and a data layer. This is what I heard for more than 5 years. When I started working I discovered that this is true, but sometimes it is better to have more than just three layers. Two or three days ago I discovered an article by John Papa that explains how to use Entity Framework in a layered application. According to that article you should have:

    - UI layer and presentation layer (Model View pattern)
    - Service layer (WCF)
    - Business layer
    - Data access layer

    The service layer is, to me, one of the best ideas I've heard since I started working: your UI is then completely "disconnected" from the business and data layers. Now, when I dug deeper into the provided source code, I began to have some questions. Can you help me in answering them?

    Question #0: is this a good enterprise application template in your opinion?
    Question #1: where should I host the service layer? Should it be a Windows Service, or what else?
    Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding, but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
    Question #3: if you agree with me on question 2, which kind of binding would you use?

    Looking forward to hearing from you. Have a nice weekend! Marco
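
    On questions #1 and #3, one common combination is to self-host the service layer in a Windows Service and expose a netTcpBinding endpoint for intranet callers (binary serialization, cheaper than WSHttpBinding), keeping WSHttpBinding only where outside interoperability is needed. A sketch with placeholder service and contract names:

        public partial class ServiceLayerHost : ServiceBase
        {
            private ServiceHost _host;

            protected override void OnStart(string[] args)
            {
                _host = new ServiceHost(typeof(OrderService));   // OrderService is a placeholder
                _host.AddServiceEndpoint(typeof(IOrderService),
                                         new NetTcpBinding(),
                                         "net.tcp://localhost:9000/OrderService");
                _host.Open();
            }

            protected override void OnStop()
            {
                if (_host != null)
                {
                    _host.Close();
                    _host = null;
                }
            }
        }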

    Read the article

  • Ado JobStore use!

    - by aleo
    Well, I'm new to Quartz. I'm following this tutorial, and I configured my scheduler instance and Quartz to use these properties:

        properties["quartz.jobStore.lockHandler.type"] = "Quartz.Impl.AdoJobStore.UpdateLockRowSemaphore, Quartz";
        properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
        properties["quartz.jobStore.dataSource"] = "default";
        properties["quartz.dataSource.default.connectionString"] = "Server=localhost;Initial Catalog=aleo;Persist Security Info=True;User ID=userid;Password=password";
        properties["quartz.dataSource.default.provider"] = "SqlServer-20";
        properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
        properties["quartz.jobStore.useProperties"] = "true";
        properties["quartz.jobStore.tablePrefix"] = "QRTZ_";

        ISchedulerFactory schedFact = new Quartz.Impl.StdSchedulerFactory(properties);
        IScheduler sched = schedFact.GetScheduler();
        sched.Start();

    But what's next? I am new to C#, but if someone explains it I can understand :) My question is: how do I add jobs, triggers and other stuff to the database? I have also created the tables from the Database/tables folder that comes with the Quartz API. Thanks.
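
    With an AdoJobStore configured, you normally don't insert rows into the QRTZ_ tables yourself; you schedule jobs through the IScheduler and the job store persists them. A sketch in the Quartz.NET 1.x style, where MyJob is a placeholder class implementing IJob:

        JobDetail job = new JobDetail("nightlyCleanup", "maintenance", typeof(MyJob));
        job.JobDataMap.Put("target", "archive");          // useProperties=true, so string values only

        CronTrigger trigger = new CronTrigger("nightlyCleanupTrigger", "maintenance",
                                              "0 0 3 * * ?");   // every day at 03:00

        sched.ScheduleJob(job, trigger);                  // rows show up in QRTZ_JOB_DETAILS / QRTZ_TRIGGERS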

    Read the article

  • Load Balance WCF and Share a Remote MSMQ for High Throughput

    - by BarDev
    After a ton of reading in books and on the web, I have noticed hints that WCF and MSMQ can be used to achieve high throughput. The information I have seen mentions using multiple WCF services in a farm that read from a single MSMQ queue. The problem is that I have found paragraphs here and there mentioning that high throughput can be achieved, but I cannot seem to find a document on how to implement it. The following excerpt is from the MSDN article "Best Practices for Queued Communication" (http://msdn.microsoft.com/en-us/library/ms731093.aspx):

        To achieve higher throughput and availability, use a farm of WCF services that read from the queue. This requires that all of these services expose the same contract on the same endpoint. The farm approach works best for applications that have high production rates of messages because it enables a number of services to all read from the same queue.

    This is what I'm trying to solve. I have an intranet application where a client sends a request to a WCF service, but I want the ability to load balance the WCF services across multiple servers in a farm. I also want these WCF services in the farm to do transactional reads from a remote MSMQ queue when an item is available in the queue. If this is possible, one issue I have is that I do not understand the activation process WCF uses to retrieve messages from a remote queue. Does anyone know of any articles or webcasts that explain it in detail? BarDev
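
    The farm part is mostly configuration: every server runs the same service with the same contract, and each server's endpoint address points at the same remote queue. A sketch of the endpoint element (queue and contract names are placeholders; as far as I can tell, transactional reads from a remote queue also require MSMQ 4.0 on the readers):

        <endpoint address="net.msmq://queueserver/private/OrderQueue"
                  binding="netMsmqBinding"
                  contract="MyApp.Contracts.IOrderService" />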

    Read the article

  • Creating a Custom Design-Time Environment

    - by Charlie
    Hello all, my question is related to the design-time support of WPF. From MSDN I read:

        The WPF Designer provides a framework and a public API which you can use to implement custom adorners, tools, property editors, and designers.

    But the vast majority of the examples I have found are trivial and do not illustrate much about creating a customized designer in an existing WPF application. We have migrated our application from Windows Forms to WPF over the past year, and the next step will be to take an existing WinForms panel designer and rewrite it in WPF. Suffice it to say that this will be a huge project, but I don't even know where to begin. I am wondering if any of you have had similar experiences writing a customized designer for a WPF application, and what it was like. Even better, if you could compare and contrast the functionality between the WinForms designer and the WPF designer, or explain the transition from the former to the latter, that would be helpful. If you know of any simple examples that demonstrate a customized design environment (with custom controls, etc.), that would be extremely beneficial. All in all, I am just wondering if many people have undertaken this yet, and what their results have been.

    EDIT: To clarify, yes, I am talking about hosting a WPF designer. It appears that this may not even be possible, which is a huge setback. Here is a screenshot of our current WinForms designer. As you can see, it is used to create customized user interfaces: you can drag custom controls onto it and design them, then put the panel into a "run mode" in which all of the controls become functional. Short of spending months writing our own designer, would this be possible in WPF? What about .NET 4.0 and VS2010? Will those add any designer functionality?

    Read the article

  • What type of bug causes a program to slowly use more processor power and all of a sudden go to 100%?

    - by reinier
    Hi, I was hoping to get some good ideas as to what might be causing a really nasty bug. This is a program which transmits data over a socket and also receives messages back. I could explain lots more, but I don't think that would help here. I'm just searching for hypothetical problems which could cause the following behaviour:

    - the program runs
    - processor time slowly accumulates (till around 60%)
    - all of a sudden (could be after 30 but also after 60 seconds) the processor time shoots to 100%
    - the program halts completely

    In my syslog it always ends on one line with a memory allocation (something similar to: myArray = new byte[16384]) in the same thread. Now here is the weird part: if I set the debugger anywhere... it immediately stops on that line. So just the act of setting a breakpoint made the thread continue (it wasn't running, since I saw no log output anymore). I was thinking 'deadlock', but that would not cause 100% processor power; if anything, the opposite. Also, setting a breakpoint would not cause a deadlock to end. Does anyone else have a theoretical suggestion as to what kind of 'construct' might cause this effect (apart from 'bad programming')? ;^) thanks

    EDIT: I just noticed... by setting the send speed slower, the problem shows itself much later than expected. I would expect it around the same number of packets sent, but no: the number of packets sent before it hits the same problem is much higher this way.

    Read the article

  • Why does Integrated Windows Authentication fail when clients access off the network?

    - by Bryan
    My background is not with web applications, so this problem is hard for me to explain easily. First I'll try to describe the setup.

    Client setup:
    - The only browsers affected are IE 6-8 (Firefox, Chrome, Opera and Safari all work fine).
    - A user will try to access our web application from a company laptop that is not connected to our network.
    - This machine will be a member of our workgroup and have the company DNS listed as a trusted intranet site (to which the application in question would belong).
    - The security logon mode is set to "Automatic logon only in intranet zone", and IWA authentication is enabled in the client's browser.

    Server setup:
    - Windows Server 2003 SP2.
    - The application first redirects to an authorization ASP page which has anonymous access disabled and IWA enabled in IIS.

    What should happen is that, since the client is not currently on the network, when this page is called it should prompt the user for network credentials. But with IE, instead of prompting, the user gets a "page cannot be displayed" error because IIS is denying access to the ASP page. If the company DNS is removed from the trusted intranet site list, then it prompts correctly, but that disables single sign-on the next time the computer is connected to the network or VPN. My assumption is that since IE uses IWA and the site is listed as an internal site, when no network is found IE just sends nulls to the server attempting to authenticate, which are swiftly punted back. Other browsers do not have security zones, so when network credentials are not present the server prompts for them. Is there a way to get around this so that our clients can keep the company DNS in the intranet zone, but still have the server prompt for credentials when not on the network? Any attempt to allow anonymous access on the ASP page will, as far as I know, cause AUTH_USER to return null and again break SSO. I realize this is slightly rambling, so I will do my best to clarify and answer any questions you guys might have. Thanks in advance.

    Read the article

  • Maintaining a large list in Python

    - by Oren
    I need to maintain a large list of Python pickleable objects that will quickly execute the following algorithm. The list will have 4 pointers and can:

    - Read/write each of the 4 pointed nodes
    - Insert new nodes before the pointed nodes
    - Increase a pointer by 1
    - Pop nodes from the beginning of the list (without overlapping the pointers)
    - Add nodes at the end of the list

    The list can be very large (300-400 MB), so it shouldn't be contained in RAM. The nodes are small (100-200 bytes), but can contain various types of data. The most efficient way to do this, in my opinion, is to implement a paging mechanism. On the hard drive there would be a database of a linked list of pages, where at any moment up to 5 of them are loaded in memory. On insertion, if some max page size is reached, the page is split into 2 small pages; on pointer increment, if a pointer is increased beyond its loaded page, the following page is loaded instead. This is a good solution, but I don't have the time to write it, especially if I want to make it generic and implement all the Python list features. Using SQL tables is not good either, since I'd need to implement a complex index-key mechanism. I tried ZODB and zc.blist, which implement a BTree-based list that can be stored in a ZODB database file, but I don't know how to configure them so the above features run in reasonable time. I don't need all the multi-threading/transactioning features; no one else will touch the database file except my single-threaded program. Can anyone explain to me how to configure ZODB/zc.blist so the above features will run fast, or show me a different large-list implementation?
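
    A minimal sketch of the non-zc.blist route: keep the nodes in a ZODB IOBTree keyed by a growing integer index, so only the buckets near the pointers are held in RAM. This covers append, pop-from-front and indexed reads only; insertion before a pointer would still need a key-gap or bucket scheme on top of it, and all class and field names here are my own:

        import transaction
        from ZODB import DB
        from ZODB.FileStorage import FileStorage
        from BTrees.IOBTree import IOBTree
        from persistent import Persistent

        class NodeList(Persistent):
            def __init__(self):
                self.items = IOBTree()   # index -> pickleable node
                self.head = 0            # first live index (pop_front advances it)
                self.tail = 0            # next free index (append writes here)

            def append(self, node):
                self.items[self.tail] = node
                self.tail += 1

            def pop_front(self):
                node = self.items[self.head]
                del self.items[self.head]
                self.head += 1
                return node

            def __getitem__(self, i):
                return self.items[self.head + i]

        db = DB(FileStorage("nodes.fs"))
        conn = db.open()
        root = conn.root()
        if "nodes" not in root:
            root["nodes"] = NodeList()
        transaction.commit()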

    Read the article

  • Why does the Java VM not recover after "Too many open files" errors?

    - by Michael
    In certain well-understood circumstances, our application will open too many sockets (database connections) and reach the maximum number of open files that the OS allows. We understand this; we are fixing the issue and also bumping up the limit. What we can't explain is why parts of our application don't recover even after the number of connections abates and we're well within the limit. In this case, it's an application running under Tomcat. When this happens, we first start seeing "Too many open files" errors:

        SEVERE: Socket accept failed
        java.net.SocketException: Too many open files
            at java.net.PlainSocketImpl.socketAccept(Native Method)
            at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
            at java.net.ServerSocket.implAccept(ServerSocket.java:453)
            at java.net.ServerSocket.accept(ServerSocket.java:421)
            at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
            at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
            at java.lang.Thread.run(Thread.java:619)

    Eventually, we start seeing NoClassDefFoundErrors inside an application thread that's trying to open HTTP connections:

        java.lang.NoClassDefFoundError: org/apache/commons/httpclient/protocol/ControllerThreadSocketFactory
            at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:128)
            at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
            at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1349)
        [...]
        Caused by: java.lang.ClassNotFoundException: org.apache.commons.httpclient.protocol.ControllerThreadSocketFactory
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1387)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1233)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 8 more

    When the errant connections go away, the server starts accepting connections again and everything seems OK, but we're left with the latter error constantly being spewed to stderr. Although the application typically logs unloaded classes to stdout, I don't see any such logs just before, during or after the "Too many open files" errors. My initial theory was that the HotSpot JVM would unload seemingly unused classes when it encounters "Too many open files", but if so, it doesn't log the fact. I'd also expect it to recover if that were the case. Platform details:

        Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
        Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
        Apache Tomcat Version 6.0.18

    Read the article

  • Problem with JSONResult

    - by vikitor
    Hi, I'm still a newbie at this, so I'll try to explain what I'm doing. Basically what I want is to load a dropdownlist depending on the value of a previous one, and I want it to load the data and appear when the other one is changed. This is the code I've written in my controller:

        public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
        {
            IList<TaxClass> lists = null;
            int _phylumID = int.Parse(phylumID);
            int _kingdom = int.Parse(kingdom);
            lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom);
            return Json(lists.Count);
        }

    and this is how I call the method from the JavaScript function:

        function loadClasses(_phylum) {
            var phylum = _phylum.value;
            $.getJSON("/Suspension/GetClassesSearch/",
                { ajax: true, phylumID: phylum, kingdom: kingdom },
                function(data) {
                    alert(data);
                    alert('no fallo');
                    document.getElementById("pClass").style.display = "block";
                    document.getElementById("sClass").options[0] = new Option("-select-", "0", true, true);
                    //for (i = 0; i < data.length; i++) {
                    //    $('#sClass').addOption(data[i].classID, data[i].className);
                    //}
                });
        }

    The thing is that, just like this, it works: I pass the function the number of classes within a selected phylum, and it displays the pClass element. The problem comes when I try to populate the list with the data (which should contain the objects retrieved from the database), because when there is data returned by the database, changing return Json(lists.Count) to return Json(lists) always gives the same error:

        A circular reference was detected while serializing an object of type 'SubSonic.Schema.DatabaseColumn'.

    I've been going round and round, debugging and making tests, but I can't make it work, and it is supposed to be a simple thing, but I'm missing something. I have commented out the for loop because I'm not quite sure that's the way you access the data, since I've not been able to make it work when it finds records. Can anyone help me? Thanks in advance, Victor
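
    The usual fix for that circular-reference error is to project the SubSonic entities onto plain anonymous objects before handing them to Json(), so the serializer never walks the schema graph. A sketch (the property names ClassID/ClassName are assumptions about the TaxClass type; JsonRequestBehavior.AllowGet applies to MVC 2 and later and can be dropped on MVC 1):

        public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
        {
            int _phylumID = int.Parse(phylumID);
            int _kingdom = int.Parse(kingdom);

            var lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom)
                              .Select(c => new { classID = c.ClassID, className = c.ClassName })
                              .ToList();

            return Json(lists, JsonRequestBehavior.AllowGet);
        }

    The lowercase classID/className property names line up with what the commented-out loop in the jQuery callback already expects.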

    Read the article

  • Design/Architecture Advice Needed

    - by Rachel
    Summary: I have different components on the homepage, and each component shows some promotion to the user. I have the cart as one component, and promotions are shown depending on the contents of the cart. I have to track the user's online activities and send that information to Omniture for report generation. Now, my components are loaded asynchronously; basically they are loaded when an Ajax request fires, so there is no fixed pattern, or rather no information on when components will appear on the web page. In order to pass information to Omniture, I need to call a track function on $(document).ready() and append information for each component (7 parameters are required by Omniture for each component). So in the init:config function of each component I am calling Omniture and passing parameters, but now the number of Omniture calls is directly proportional to the number of components on the web page. This is not acceptable, as each call to Omniture is very expensive. I am looking for a way to club together the 7 parameters from every component and then make one call to Omniture in which I pass that information. The point to note is that I do not know when the components are loaded, so there is no pre-defined time or number of components that will be loaded. The thing is, I am calling the track function when the document is ready, but components are loaded after the call to Omniture has been made, and so my question is:

    Q: How can I collect the information for all the components and then make just one call to Omniture to send that information?

    As mentioned, I do not know when the components are loaded, as they are done on Ajax requests. I hope I have been able to explain my challenge, and I would appreciate it if someone could provide design/architecture solutions for it.
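
    A sketch of one way to batch it: give every component a shared collector to report into, and let the collector debounce a single Omniture call a short interval after the last component checks in. sendToOmniture stands in for whatever s.t()/s.tl() wrapper is already in place, and the 500 ms window is an arbitrary choice:

        var omnitureBatch = {
            pending: [],
            timer: null,
            report: function (componentData) {        // called from each component's init:config
                this.pending.push(componentData);
                clearTimeout(this.timer);
                var self = this;
                this.timer = setTimeout(function () { self.flush(); }, 500);
            },
            flush: function () {
                if (this.pending.length) {
                    sendToOmniture(this.pending);      // one call carrying every component's 7 parameters
                    this.pending = [];
                }
            }
        };

        // in a component, instead of calling Omniture directly:
        omnitureBatch.report({ component: "cart", promotion: "FREE-SHIP" /* ...5 more parameters... */ });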

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the join order with STRAIGHT_JOIN; PostgreSQL offers no such option. Here is the query:

        SELECT avg(d.amount) AS amount, y.year
        FROM station s, station_district sd, year_ref y, month_ref m, daily d
            LEFT JOIN city c ON c.id = 10663
        WHERE
            -- Find all the stations within a specific unit radius ... --
            6371.009 * SQRT(
                POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                 POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))) <= 50 AND
            -- Ignore stations outside the given elevations --
            s.elevation BETWEEN 0 AND 2000 AND
            sd.id = s.station_district_id AND
            -- Gather all known years for that station ... --
            y.station_district_id = sd.id AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            y.year BETWEEN 1980 AND 2000 AND
            -- Filtered by all known months ... --
            m.year_ref_id = y.id AND
            m.month = 12 AND
            -- Whittled down by category ... --
            m.category_id = '001' AND
            -- Into the valid daily climate data. --
            m.id = d.month_ref_id AND
            d.daily_flag_id <> 'M'
        GROUP BY y.year

    It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!
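
    PostgreSQL's closest equivalent to STRAIGHT_JOIN is to lower the join-collapse limit so the planner keeps explicit JOINs in the order they are written, and then rewrite the FROM clause to start at city. A sketch with the same predicates as above, just re-ordered (not a tested plan):

        SET join_collapse_limit = 1;

        SELECT avg(d.amount) AS amount, y.year
        FROM city c
        JOIN station s
          ON 6371.009 * SQRT(
                 POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                 (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                  POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))) <= 50
         AND s.elevation BETWEEN 0 AND 2000
        JOIN station_district sd ON sd.id = s.station_district_id
        JOIN year_ref y ON y.station_district_id = sd.id
                       AND y.year BETWEEN 1980 AND 2000
        JOIN month_ref m ON m.year_ref_id = y.id
                        AND m.month = 12
                        AND m.category_id = '001'
        JOIN daily d ON d.month_ref_id = m.id
                    AND d.daily_flag_id <> 'M'
        WHERE c.id = 10663
        GROUP BY y.year;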

    Read the article

  • Android media thumbnails. Serious issues?

    - by Ralphleon
    I've been playing with Android's thumbnails for a while now, and I've seen some inconsistencies that make me want to scream. My goal is to have a simple list of all images (and a separate list for video) with the thumbnail and filename. Device: HTC Evo (fresh from Google I/O).

    First off, there is http://androidsamples.blogspot.com/2009/06/how-to-display-thumbnails-of-images.html. That code doesn't seem to work at all; thumbnails are duplicated, some with the "mirror" effect and some without, and some won't load and just display a black square. I've tried rebuilding the thumbnails by deleting the "album thumbs" directory from the SD card. HTC's Gallery application seems to show everything fine.

    This approach seems to work:

        Bitmap thumb = MediaStore.Images.Thumbnails.getThumbnail(
            getContentResolver(), id, MediaStore.Video.Thumbnails.MICRO_KIND, null);
        imageView.setImageBitmap(thumb);

    where id is the original image's id and imageView is some image view. This is great! But, strangely, it is way too slow to be used inside a SimpleViewBinder.

    Next approach:

        String[] proj = {MediaStore.Images.Thumbnails._ID};
        Cursor c = managedQuery(MediaStore.Images.Thumbnails.EXTERNAL_CONTENT_URI, proj,
            MediaStore.Images.Thumbnails.IMAGE_ID + "=" + id, null, null);
        if (c != null && c.moveToFirst()) {
            Uri thumb = Uri.withAppendedPath(mThumbUri, c.getLong(0) + "");
            imageView.setImageURI(thumb);
        }

    I should explain that I feel the WHERE condition is required because there doesn't seem to be any guarantee that your URI will have the same ID for both a thumbnail and its parent image. This works for all of the current images, but as soon as I start adding pictures with the camera they show up as blank! Debugging shows a dreaded "SkImageDecoder::Factory returned null" error, and the URI is returned as invalid. These are the same images that work with the previous call. Can anyone either catch my logical failure or point me to some working code?
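
    One way to keep the getThumbnail() call (the one that returned correct bitmaps) while still using it from a list adapter is to push it off the UI thread. A sketch using AsyncTask, assumed to live inside the activity, with the adapter wiring left out:

        private void loadThumbnailAsync(final long imageId, final ImageView target) {
            new AsyncTask<Void, Void, Bitmap>() {
                @Override
                protected Bitmap doInBackground(Void... params) {
                    // same call that worked above, just not on the UI thread
                    return MediaStore.Images.Thumbnails.getThumbnail(
                            getContentResolver(), imageId,
                            MediaStore.Images.Thumbnails.MICRO_KIND, null);
                }

                @Override
                protected void onPostExecute(Bitmap thumb) {
                    if (thumb != null) {
                        target.setImageBitmap(thumb);
                    }
                }
            }.execute();
        }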

    Read the article

  • Reflective Generic Detection

    - by Aren B
    Trying to find out if a provided Type is of a given generic type (with any generic type arguments inside). Let me explain:

        bool IsOfGenericType(Type baseType, Type sampleType)
        {
            /// ...
        }

    Such that:

        IsOfGenericType(typeof(Dictionary<,>), typeof(Dictionary<string, int>));  // True
        IsOfGenericType(typeof(IDictionary<,>), typeof(Dictionary<string, int>)); // True
        IsOfGenericType(typeof(IList<>), typeof(Dictionary<string,int>));         // False

    However, I played with some reflection in the Immediate Window, and here were my results:

        typeof(Dictionary<,>) is typeof(Dictionary<string,int>)                      Type expected
        typeof(Dictionary<string,int>) is typeof(Dictionary<string,int>)              Type expected
        typeof(Dictionary<string,int>).IsAssignableFrom(typeof(Dictionary<,>))        false
        typeof(Dictionary<string,int>).IsSubclassOf(typeof(Dictionary<,>))            false
        typeof(Dictionary<string,int>).IsInstanceOfType(typeof(Dictionary<,>))        false
        typeof(Dictionary<,>).IsInstanceOfType(typeof(Dictionary<string,int>))        false
        typeof(Dictionary<,>).IsAssignableFrom(typeof(Dictionary<string,int>))        false
        typeof(Dictionary<,>).IsSubclassOf(typeof(Dictionary<string,int>))            false

    So now I'm at a loss, because when you look at the Name of typeof(Dictionary<,>) you get Dictionary`2, which is the same as typeof(Dictionary<string,int>).Name.
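
    For reference, the comparison that does line up is GetGenericTypeDefinition() against the open generic, checked over the type itself, its base classes and its interfaces. A sketch that covers the three examples above:

        static bool IsOfGenericType(Type baseType, Type sampleType)
        {
            if (!baseType.IsGenericTypeDefinition)
                return baseType.IsAssignableFrom(sampleType);

            // interfaces cover the IDictionary<,> / IList<> cases
            foreach (Type itf in sampleType.GetInterfaces())
                if (itf.IsGenericType && itf.GetGenericTypeDefinition() == baseType)
                    return true;

            // the inheritance chain covers Dictionary<,> itself and anything derived from it
            for (Type t = sampleType; t != null; t = t.BaseType)
                if (t.IsGenericType && t.GetGenericTypeDefinition() == baseType)
                    return true;

            return false;
        }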

    Read the article

  • Applying business logic to form elements in ASP.NET MVC

    - by Brettski
    I am looking for best practices in applying business logic to form elements in an ASP.NET MVC application. I assume the concepts would apply to most MVC patterns. The goal is to have all the business logic stem from the same place. I have a basic form with four elements:

    - Textbox: for entering data
    - Checkbox: for staff approval
    - Checkbox: for client approval
    - Button: for submitting the form

    The textbox and two checkboxes are fields in a database accessed using LINQ to SQL. What I want to do is put logic around the checkboxes governing who can check them and when. Truth table (a little silly, but it's an example):

        when checked    ||  may check Staff  ||  may check Client
        Staff | Client  ||  Staff | Client   ||  Staff | Client
          0   |   0     ||    1   |   0      ||    0   |   1
          0   |   1     ||    0   |   0      ||    0   |   1
          1   |   0     ||    1   |   0      ||    0   |   1
          1   |   1     ||    0   |   0      ||    0   |   1

    There are two security roles, staff and client; a person's role determines who they are, and the roles are maintained in the database along with the current state of the checkboxes. So I could simply store the user's role in the view class and enable and disable the checkboxes based on their role, but this doesn't seem proper: that is putting logic in the UI to control which actions can be taken. How do I get most of this control down into the model? I mean, I need to control which checkboxes are enabled and then check the results in the model when the form is posted, so it seems the best place for the logic to originate. I am looking for a good approach to constructing this, something to follow as I build the application. If you know of some great references which explain these best practices, that is really appreciated too.
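
    A sketch of pushing the truth table into the model so the view only reads booleans (class and member names are placeholders; per the table above, the staff box is checkable by staff only while the client box is unchecked, and the client box is always checkable by the client):

        public class ApprovalModel
        {
            public bool StaffApproved { get; set; }
            public bool ClientApproved { get; set; }

            // staff may check their box only while the client box is unchecked
            public bool CanCheckStaffBox(bool userIsStaff)
            {
                return userIsStaff && !ClientApproved;
            }

            // the client may always check their box
            public bool CanCheckClientBox(bool userIsClient)
            {
                return userIsClient;
            }
        }

    The view then binds the checkboxes' enabled state to these two methods, and the same methods are re-checked on postback before the values are saved.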

    Read the article

  • Cookies not present after using XMLHttpRequest

    - by Joe B
    I'm trying to make a bookmarklet to download videos off of YouTube, but I've come across a little problem. To detect the highest quality video available, I use a sort of brute force method, in which I make requests using the XMLHttpRequest object until a 404 isn't returned (I can't simply wait for a 200 OK, because YouTube redirects to a different server if the video is available, and the cross-domain policy won't allow me to access any of that data). Once a working URL is found, I simply set window.location to the URL and the download should start, right? Wrong. A request is made, but for reasons unknown to me, the cookies are stripped and YouTube returns a 403 access denied. This does not happen if the XML requests aren't made before it, i.e. if I just set window.location to the URL everything works fine; it's when I do the XMLHttpRequests that the cookies aren't sent. It's hard to explain, so here's the script:

        var formats = ["37", "22", "35", "34", "18", ""];
        var url = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                  "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t'])) + "&fmt=";
        for (var i = 0; i < formats.length; i++) {
            xmlhttp = new XMLHttpRequest;
            xmlhttp.open("HEAD", url + formats[i], false);
            xmlhttp.send(null);
            if (xmlhttp.status != 404) {
                document.location = url + formats[i];
                break;
            }
        }

    That script does not send the cookies after setting document.location, and thus does not work. However, simply doing this:

        document.location = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                            "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t']))

    DOES send the cookies along with the request, and does work. The only downside is that I can't automatically detect the highest quality; I just have to try every "fmt" parameter manually until I get it right. So my question is: why is the XMLHttpRequest object removing cookies from subsequent requests? This is the first time I've ever done anything in JS, by the way, so please go easy on me. ;)

    Read the article

  • Typedef and Struct in C and H files

    - by Leif Andersen
    I've been using the following code to create various structs, but only give people outside of the C file a pointer to them. (Yes, I know they could potentially mess around with it, so it's not entirely like the private keyword in Java, but that's okay with me.) Anyway, I've been using the following code, and I looked at it today, and I'm really surprised that it's actually working. Can anyone explain why? In my C file, I create my struct, but don't give it a tag in the typedef namespace:

        struct LABall {
            int x;
            int y;
            int radius;
            Vector velocity;
        };

    And in the H file, I put this:

        typedef struct LABall* LABall;

    I am obviously using #import "LABall.h" in the C file, but I am NOT using #import "LABall.c" in the header file, as that would defeat the whole purpose of a separate header file. So, why am I able to create a pointer to the LABall* struct in the H file when I haven't actually included it? Does it have something to do with the struct namespace working across files, even when one file is in no way linked to another? Thank you.

    Read the article

  • How can parallelism affect number of results?

    - by spender
    I have a fairly complex query that looks something like this:

        create table Items(SomeOtherTableID int, SomeField int)
        create table SomeOtherTable(Id int, GroupID int)

        with cte1 as
        (
            select SomeOtherTableID, COUNT(*) SubItemCount
            from Items t
            where t.SomeField is not null
            group by SomeOtherTableID
        ), cte2 as
        (
            select tc.SomeOtherTableID,
                   ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank
            from Items t
            inner join SomeOtherTable a on a.Id = t.SomeOtherTableID
            inner join cte1 tc on tc.SomeOtherTableID = t.SomeOtherTableID
            where t.SomeField is not null
        ), cte3 as
        (
            select SomeOtherTableID
            from cte2
            where SubItemRank = 1
        )
        select *
        from cte3 t1
        inner join cte3 t2 on t1.SomeOtherTableID < t2.SomeOtherTableID
        option (maxdop 1)

    The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join of cte3 with itself (so that I can compare every value in the table with every other value at a later point). Notice the final line:

        option (maxdop 1)

    Apparently, this switches off parallelism. So, with 6222 result rows in cte3, I would expect (6222*6221)/2, or 19353531, results in the subsequent cross-joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?

    Read the article
