Search Results

Search found 4153 results on 167 pages for 'rave reports'.

Page 144/167 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Fixing parent controller's elements after screen orientation

    - by Jonas Anderson
    I have a tab bar application with mixed orientation support for only some views. One of the child view controllers shown from one of the tab's navigation controllers is displayed only in landscape mode. To accomplish this, I've done the view transformation for the child view as suggested here: Is there a documented way to set the iPhone orientation? The only problem I'm seeing is that after I've performed the orientation adjustment for the child controller and then readjusted the orientation back to normal on its dismissal, the contents of the (parent) navigation controller are still shown with landscape-mode dimensions, even though the navigation controller reports the correct value for interfaceOrientation. How do I ensure that the view's size is reset to match the orientation without hardcoding screen dimensions? I have the following in the root navigation controller's viewWillAppear (invoked after the child controller is dismissed):

        - (void)viewWillAppear:(BOOL)animated {
            NSLog(@"viewFrame: (%2f, %2f), width: %2f, height: %2f\n",
                  self.view.frame.origin.x, self.view.frame.origin.y,
                  self.view.frame.size.width, self.view.frame.size.height);
            // Frame values are (0, 0) for (x, y), width: 320, height: 367 before I
            // displayed the child controller.
            // Frame values are (0, 0), width: 480, height: 219 after returning from the
            // child controller -- it still has the landscape dimensions.
            NSLog(@"orientation: %d", self.interfaceOrientation); // reports portrait as expected
        }

    I've tried invoking 'layoutIfNeeded' as well as 'setNeedsDisplay' on the view, but neither of them brings the view contents into the correct display. Any suggestions would be greatly appreciated.

    Read the article

  • Mac OS X and static boost libs -> std::string fail

    - by Ionic
    Hi all, I'm experiencing some very weird problems with static Boost libraries under Mac OS X 10.6.6. The error message is:

        main(78485) malloc: *** error for object 0x1000e0b20: pointer being freed was not allocated
        *** set a breakpoint in malloc_error_break to debug
        [1]    78485 abort (core dumped)

    and here is a tiny bit of example code which will trigger this problem:

        #define BOOST_FILESYSTEM_VERSION 3
        #include <boost/filesystem.hpp>
        #include <iostream>

        int main (int argc, char **argv)
        {
            std::cout << boost::filesystem::current_path().string() << '\n';
        }

    This problem always occurs when linking the static Boost libraries into the binary. Linking dynamically works fine, though. I've seen various reports of quite a similar OS X bug with GCC 4.2 and the _GLIBCXX_DEBUG macro set, but this one seems even more generic, as I'm neither using Xcode nor setting the macro (even undefining it does not help; I tried that just to make sure it's really not related to this problem). Does anybody have any pointers to why this is happening, or maybe even a solution (rather than using the dynamic-library workaround)? Best regards, Mihai

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report -- 5/1/2010 to now -- over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter; hence the importance of the question.

    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc -- "5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.
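    A minimal C# sketch of the DateTime.Today.AddDays(1) approach described above, for illustration only; the method shape, connection handling, and use of SqlDataAdapter are assumptions rather than code from the original web app:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static DataTable GetFooData(SqlConnection conn, DateTime startDate)
        {
            // Midnight tomorrow: the value stays constant for the whole day, unlike
            // DateTime.Now, so every call that day sends the same @EndDate.
            DateTime endDate = DateTime.Today.AddDays(1);

            using (var cmd = new SqlCommand("GetFooData", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
                cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = endDate;

                var table = new DataTable();
                using (var adapter = new SqlDataAdapter(cmd))
                {
                    adapter.Fill(table);
                }
                return table;
            }
        }

    Because the dates travel as typed parameters rather than as literals concatenated into the SQL text, the procedure call itself is identical from execution to execution; whether the changing parameter value still hurts plan reuse is exactly the question being asked.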

    Read the article

  • Linq In Clause & Predicate building

    - by Michael G
    I have two tables, Report and ReportData. ReportData has a constraint on ReportID. How can I write my LINQ query to return all Report objects where the predicate conditions are met for ReportData? Something like this in SQL:

        SELECT *
        FROM Report AS r
        WHERE r.ServiceID = 3
          AND r.ReportID IN (SELECT ReportID FROM ReportData WHERE JobID LIKE 'Something%')

    This is how I'm building my predicate:

        Expression<Func<ReportData, bool>> predicate = PredicateBuilder.True<ReportData>();
        predicate = predicate.And(x => x.JobID.StartsWith(QueryConfig.Instance.DataStreamName));
        var q = engine.GetReports(predicate, reportsDataContext);
        reports = q.ToList();

    This is my query construction at the moment:

        public override IQueryable<Report> GetReports(Expression<Func<ReportData, bool>> predicate, LLReportsDataContext reportDC)
        {
            if (reportDC == null)
                throw new ArgumentNullException("reportDC");

            var q = reportDC.ReportDatas.Where(predicate).Where(r => r.ServiceID.Equals(1)).Select(r => r.Report);
            return q;
        }
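    For reference, a hedged sketch of how the SQL IN clause above translates to LINQ to SQL when queried from the Report side; the context name db and the Reports/ReportDatas table properties are assumptions standing in for the question's LLReportsDataContext:

        using System.Linq;

        // Reports with ServiceID 3 that have at least one ReportData row
        // whose JobID starts with the given prefix (the IN / EXISTS shape).
        var prefix = "Something";

        var reports =
            from r in db.Reports
            where r.ServiceID == 3
               && db.ReportDatas.Any(d => d.ReportID == r.ReportID
                                       && d.JobID.StartsWith(prefix))
            select r;

        var list = reports.ToList();

    StartsWith on a string column is what LINQ to SQL turns into LIKE 'Something%' on the server.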

    Read the article

  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-SQL Server 7.0. The tables are normalized and designed mainly for the application. The developers, for the most part, have a legacy SQL Server mindset, using only the parts of T-SQL that existed back in 7.0 and none of the new T-SQL features or tools bundled with 2005. We're currently trying to build on-demand reports using some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale for this was that we didn't want to spend more money to buy this additional software from Microsoft (this was not my decision and I had no input, but it is my problem now). But from what I can tell, Enterprise includes all of these tools, or am I missing something? What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?

    Read the article

  • high cpu in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago, we started getting reports from one of our customers that the server the IIS is hosted on is very slow. When I checked the issue, I saw the application pool with my process rocket to almost 100% CPU on an 8-CPU server. I checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I tried installing .NET 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem and only replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump of the IIS processes using adplus and WinDbg and try to figure out what's wrong. Has anyone encountered something like this, or has an idea which direction I should check?

    P.S. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same.

    Edit: well, problem solved. It turns out I had a SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. It took me two days to find because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.
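    The resolution above, a query with no row limit, is the kind of thing a hard cap guards against. A hedged sketch of that idea follows; the table, columns, and method name are illustrative and not taken from the POS application:

        using System.Data;
        using System.Data.SqlClient;

        static DataTable LoadRecentSales(SqlConnection conn, int maxRows)
        {
            // TOP (@MaxRows) keeps the result set bounded no matter how large
            // the customer's table grows.
            const string sql =
                "SELECT TOP (@MaxRows) SaleID, SaleDate, Amount " +
                "FROM Sales ORDER BY SaleDate DESC";

            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.Add("@MaxRows", SqlDbType.Int).Value = maxRows;
                var table = new DataTable();
                using (var adapter = new SqlDataAdapter(cmd))
                {
                    adapter.Fill(table);
                }
                return table;
            }
        }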

    Read the article

  • Any diff/merge tool that provides a report (metrics) of conflicts?

    - by cad
    CONTEXT: I am preparing a big C# merge using Visual Studio 2008 and TFS. I need to create a report with the files and the number of collisions (total changes and conflicts) for each file (and in total, of course).

    PROBLEM: I cannot do it, for two reasons (the first one is solved):

    1. Using TFS merge I can access the file comparison, but I cannot export the list of conflicting files; I can only try to resolve the conflicts. (I have solved problem 1 using Beyond Compare. It allows me to export the file list.)
    2. Using TFS merge I can only get the number of conflicts manually, file by file, but I have more than 800 files (and will probably have to repeat this in the near future, so doing it manually is not an option).

    There are dozens of file comparison tools (http://en.wikipedia.org/wiki/Comparison_of_file_comparison_tools), but I am not sure which one could (if any) give me these metrics. I have also read several forums and questions here, but they are more general questions (which diff tool is better) and I am looking for a very specific report. So my questions are: Is Visual Studio 2010 (still using TFS 2008) capable of doing such reports/exports? Is there any tool that provides this kind of metrics? (Right now I am trying Beyond Compare.)

    Read the article

  • Having trouble using heredoc syntax in PHP

    - by wamp
        <?php
        $output = <<< END
        <table style="display: table;" class="listview rowstyle-rowhighlight" id="resourcegrid">
        <thead>
        <tr>
        <th width="70"></th>
        <th style="-moz-user-select: none;" class="sortable fd-column-0"><a class="fdTableSortTrigger" href="#">Name</a></th>
        <th style="-moz-user-select: none;" class="sortable fd-column-1"><a class="fdTableSortTrigger" href="#">Contributor</a></th>
        <th style="-moz-user-select: none;" class="sortable fd-column-3"><a class="fdTableSortTrigger" href="#">Modified</a></th>
        </tr>
        </thead><tbody>
        END;
        echo $output;

    When I run it, it reports:

        Parse error: parse error on line 2

    But I don't see anything abnormal.

    Read the article

  • Does the "Supporting Multiple Screens" document contradict itself?

    - by Neil Traft
    In the Supporting Multiple Screens document in the Android Dev Guide, some example screen configurations are given. One of them states that the small-ldpi designation is given to QVGA (240x320) screens with a physical size of 2.6"-3.0". According to this DPI calculator, a 2.8" QVGA display equates to 143 dpi. However, further down the page the document explicitly states that all screens over 140 dpi are considered "medium" density. So which is it, ldpi or mdpi? Is this a mistake? Does anyone know what the HTC Tattoo or similar device actually reports? I don't have access to any devices like this. Also, with the recent publishing of this document, I'm glad to see we finally have an explicit statement of the exact DPI ranges of the three density categories. But why haven't we been given the same for the small, medium, and large screen size categories? I'd like to know the exact ranges for all these. Thanks in advance for your help!

    Read the article

  • How to know why an animation stutters?

    - by Patrick Klug
    I have a few fairly simple animations (moving text around, moving ellipses, etc.), and when running full screen (1920x1080 minus the task bar) the WPF Performance Suite reports a good frame rate of around 50 FPS throughout the animation. Dirty rect addition is somewhere around 300 rect/s, the SW frames are between 0 and 4, and the HW frames are between 3 and 5. Video memory usage is around 80 MB. The problem is that the animation stutters every other half second. My machine is a new Dell XPS 15 laptop with a GeForce GT 435 with 2 GB of memory, and the drivers are up to date. (The same behavior occurs on my netbook in full screen as well, so I don't think it is hardware related.) If I make the window smaller, the stutter goes away. The stutter occurs with the simplest of animations, even with just a couple of elements, but adding more elements certainly makes it more noticeable. How can I find out what causes this stutter? Come to think of it, I have not actually seen any WPF animations that run smoothly in full screen. Is this even possible?

    Read the article

  • Sql Server Compact Edition version error.

    - by Tim
    I am working on a .NET ClickOnce project that uses SQL Server 2005 Compact Edition to synchronize remote data through the use of a merge replication. This application has been live for nearly a year now, and while we encounter occasional synchronization errors, things run quite smoothly for the most part. Yesterday a user reported an error that I have never seen before and have yet to find any information on online. Many users synchronize every night, and I haven't received error reports from anyone else, so this issue must be isolated to this particular user / client machine. Here are the full details of the error:

    - Error Code : 80004005
    - Message : The message contains an unexpected replication operation code. The version of SQL Server Compact Edition Client Agent and SQL Server Compact Edition Server Agent should match. [ replication operation code = 31 ]
    - Minor Error : 28526
    - Source : Microsoft SQL Server Compact Edition
    - Numeric Parameters : 31

    One interesting thing that I've found is that his data does get synchronized to the server, so this error must occur after the upload completes. I have yet to determine whether or not changes at the server are still being downloaded to his subscription. Thinking that maybe there was some kind of version conflict going on, I had a remote desktop session with this user last night and uninstalled both the application and the SQL Server Compact Edition prerequisite, then reinstalled both from our ClickOnce publication site. I also removed his existing local database file so that upon synchronization an entirely new subscription would be issued to him. Still his errors continue. I suppose the error may be somewhat general, and the text in the error message stating that the versions should match may not necessarily reflect the problem at hand. This site contains the only official reference to this error that I've been able to find, and it offers no more detail than the error message itself. Has anyone else encountered this error? Or does anyone know SQL Compact well enough to have a better guess as to what is going on here? Any help / suggestions will be greatly appreciated!
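    One hedged diagnostic sketch (not from the original project) for narrowing down a client/server agent mismatch like the one named in the message: log the SQL Compact assembly and engine versions on the client machine so they can be compared against the Server Agent installed on the IIS box. The connection string handling is an assumption.

        using System;
        using System.Data.SqlServerCe;

        static void LogSqlCeVersions(string localDbPath)
        {
            // Version of the System.Data.SqlServerCe assembly the ClickOnce app loaded.
            Version assemblyVersion = typeof(SqlCeConnection).Assembly.GetName().Version;
            Console.WriteLine("SQL CE client assembly: {0}", assemblyVersion);

            using (var conn = new SqlCeConnection("Data Source=" + localDbPath))
            {
                conn.Open();
                // Version of the local engine that actually opened the .sdf file.
                Console.WriteLine("Local engine version: {0}", conn.ServerVersion);
            }
        }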

    Read the article

  • [Cocoa] Placing an NSTimer in a separate thread

    - by ndg
    I'm trying to set up an NSTimer in a separate thread so that it continues to fire when users interact with the UI of my application. This seems to work, but Leaks reports a number of issues, and I believe I've narrowed it down to my timer code. Currently what's happening is that updateTimer tries to access an NSArrayController (timersController) which is bound to an NSTableView in my application's interface. From there, I grab the first selected row and alter its timeSpent column. From reading around, I believe what I should be trying to do is execute the updateTimer function on the main thread rather than in my timer's secondary thread. I'm posting here in the hope that someone with more experience can tell me if that's the only thing I'm doing wrong. Having read Apple's documentation on threading, I've found it an overwhelmingly large subject area.

        NSThread *timerThread = [[[NSThread alloc] initWithTarget:self
                                                         selector:@selector(startTimerThread)
                                                           object:nil] autorelease];
        [timerThread start];

        - (void)startTimerThread
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
            activeTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0
                                                            target:self
                                                          selector:@selector(updateTimer:)
                                                          userInfo:nil
                                                           repeats:YES] retain];
            [runLoop run];
            [pool release];
        }

        - (void)updateTimer:(NSTimer *)timer
        {
            NSArray *selectedTimers = [timersController selectedObjects];
            id selectedTimer = [selectedTimers objectAtIndex:0];
            NSNumber *currentTimeSpent = [selectedTimer timeSpent];
            [selectedTimer setValue:[NSNumber numberWithInt:[currentTimeSpent intValue] + 1]
                             forKey:@"timeSpent"];
        }

        - (void)stopTimer
        {
            [activeTimer invalidate];
            [activeTimer release];
        }

    Read the article

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content        # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)            # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, then for each item I want to display I have to (1) look the item up in memcache, (2) fetch the item from the datastore since no memcache entry exists, and (3) store the item in memcache. These calls build up quickly. I don't set an expire time for the entries to the memcache, so this really only happens once in the morning, but the webpage takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my webpages take about 50 ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance

    Read the article

  • How do I get into a career as a programmer/development DBA?

    - by markle976
    About 8-9 years ago I started getting into programming as a hobby. I started with my TI-86 calculator, and then moved into using Visual Basic. After about a year I started playing around with HTML and JavaScript. Then I discovered Flash; I programmed with ActionScript 2.0 for about 2 years, which led me to start using ColdFusion. After a while I realized that A) I am not a designer, and B) with the way that things were going with AJAX, .NET, and PHP, there wasn't much future in ColdFusion/ActionScript. I had been working mostly as an administrative assistant, but about 3-4 years ago I got a position where I would be doing some web development and assisting the system admin with supporting Windows desktop PCs. I have gotten some decent experience over the past few years, but it has been spread out over somewhat disparate areas:

    - I spend about 40% of my time writing PHP/MySQL and HTML/CSS, etc.
    - I spend about 20% of my time helping users with PC questions.
    - I spend about 20% of my time doing administrative things (mail merges, Excel, etc.).
    - I spend about 20% of my time managing / creating reports from our Access database.

    I have also taught myself many things on my own, and now have a beginner's level understanding of things like Windows Server, Java, Linux, Objective-C, SQL Server, C#, C++, Ruby, Mac OS X, VBA, VBScript, and basic IP networks. I feel like I am in a bit of a rut: I want to get my career moving, but I am not sure what I need to do. If I practice with C# and SQL Server Express for a year, will that be enough to get me in the door somewhere? Would it be easier to get a position if I teach myself Linux/Apache, since I have more experience with PHP/MySQL?

    Read the article

  • Silverlight 4 ComboBox - Binding to Nullable data (tried TargetNullValue but not working as expected)

    - by Laurence
    (Please note: I am a Silverlight beginner and am looking for the simplest solution here, e.g. one that doesn't involve writing/installing a replacement for the ComboBox control!)

    This is an issue with a Silverlight 4 application that uses the View Model (MVVM) approach. I have a simple form for editing a "Product" object. Product has a CategoryID property which is nullable (int?). A ComboBox is used to view and set the CategoryID; it is bound to an ObservableCollection of Categories. Product also has a number of non-nullable properties bound to TextBoxes. I want the user to see "N/A" in the ComboBox for a product with no category, and to use this "N/A" option to set CategoryID to null. So, I manually added a Category object with CategoryID=0 and CategoryName="N/A" to the collection; then I set TargetNullValue=0 in the SelectedValue binding of the ComboBox. My thinking was that when the ComboBox SelectedValue was bound to a null CategoryID it would substitute zero, and therefore select the "N/A" option. When editing a Product with a non-null CategoryID, everything works. However, when a null CategoryID is found, two problems occur:

    1. No option is selected in the ComboBox (it's blank).
    2. The ComboBox binding seems broken from this point onwards: any Product I subsequently edit (including ones with a non-null CategoryID) has nothing selected in the ComboBox (it's still populated with all categories, just no selected item).

    I've seen reports of problem #2 (here, here), but I was under the impression that #1 should have worked. What am I missing to get the "N/A" option to be selected? XAML for the ComboBox:

        <ComboBox x:Name="cboCategory"
                  ItemsSource="{Binding colCategories, Mode=OneWay}"
                  SelectedValuePath="CategoryID"
                  DisplayMemberPath="CategoryName"
                  SelectedValue="{Binding CurrentProduct.CategoryID, Mode=TwoWay, TargetNullValue=0}"
                  Height="24" Width="344"></ComboBox>
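    A hedged alternative to TargetNullValue that is sometimes easier to reason about: a value converter that maps the null CategoryID to the 0 sentinel used by the manually added "N/A" row, and maps 0 back to null on the way into the view model. The class name is illustrative; the sentinel and property types mirror the question.

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class NullToSentinelConverter : IValueConverter
        {
            private const int Sentinel = 0; // CategoryID of the "N/A" category row

            // ViewModel -> ComboBox.SelectedValue: null becomes the sentinel.
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return value ?? Sentinel;
            }

            // ComboBox.SelectedValue -> ViewModel: the sentinel becomes null again.
            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value is int && (int)value == Sentinel)
                    return null;
                return value;
            }
        }

    The binding would then use Converter={StaticResource NullToSentinel} instead of TargetNullValue=0; whether this sidesteps the Silverlight 4 selection behavior described in problem #2 would still need to be verified.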

    Read the article

  • What is an elegant way to set up a leiningen project that requires different dependencies based on the build platform?

    - by Savanni D'Gerinel
    In order to do some multi-platform GUI development, I have just switched from GTK + Clojure (because it looks like the Java bindings for GTK never got ported to Windows) to SWT + Clojure. So far, so good, in that I have gotten an uberjar built for Linux. The catch, though, is that I want to build an uberjar for Windows, and I am trying to figure out a clean way to manage the project.clj file. At first, I thought I would set the classpath to point to the SWT libraries and then build the uberjar. This would require that I set a classpath to the SWT libraries before running the jar, but I would likely need a launcher script anyway. However, Leiningen seems to ignore the classpath in this instance, because it always reports that ... Currently, project.clj looks like this for me:

        (defproject alyra.mana-punk/character "1.0.0-SNAPSHOT"
          :description "FIXME: write"
          :dependencies [[org.clojure/clojure "1.2.0"]
                         [org.clojure/clojure-contrib "1.2.0"]
                         [org.eclipse/swt-gtk-linux-x86 "3.5.2"]]
          :main alyra.mana-punk.character.core)

    The relevant line is the org.eclipse/swt-gtk-linux-x86 line. If I want to make an uberjar for Windows, I have to depend on org.eclipse/swt-win32-win32-x86, and another one for x86-64, and so on and so forth. My current solution is to simply create a separate branch for each build environment, each with a different project.clj. This seems kinda like using a semi to deliver a single gallon of milk, but I am using Bazaar for version control, so branching and repeated integrations are easy. Maybe the better way is to have a project.linux.clj, project.win32.clj, etc., but I do not see any way to tell Leiningen which project descriptor to use. What are other (preferably more elegant) ways to set up such an environment?

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that reports can point to the replicated copy instead of locking up our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers, instantaneously I hope! What seems to be happening is that when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating out to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

    1. The SQL statements used to query the database are obtaining locks which are stopping the transactions from updating the subscribers.
    2. The subscribers are just too busy for the replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal would be to have the data syncing continuously (ideally), or at least every minute. Perhaps I should reduce the batch size of the transactions? What am I doing wrong? [Note that the -Continuous flag is set!]

    Read the article

  • How to create XSD schema from XML with this kind of structure (in .net)?

    - by Mr. Brownstone
    Here's the problem: my input is an XML file that looks something like this:

        <BaseEntityClassInfo>
          <item>
            <key>BaseEntityClassInfo.SomeField</key>
            <value>valueData1</value>
          </item>
          <item>
            <key>BaseEntityClassInfo.AdditionalDataClass.SomeOtherField</key>
            <value>valueData2</value>
          </item>
          <item>
            <key>BaseEntityClassInfo.AdditionalDataClass.AnotherClassInfo.DisplayedText</key>
            <value>valueData3</value>
          </item>
          ...
        </BaseEntityClassInfo>

    The <key> element somehow describes the entity classes' fields and relationships (used in some other app that I don't have access to) and the <value> stores the actual data that I need. My goal is to programmatically generate a typed DataSet from this XML that could then be used for creating reports. I thought of building an XSD schema from the input XML file first and then using this schema to generate the DataSet, but I'm not sure how to do that. The problem is that I don't want all the data in one table; I need several tables with relationships based on the <key> value, so I guess I need to infer the relational structure from the XML <key> data in some way. So what do you think? How could this be done, and what would be the best approach? Any advice, ideas, suggestions would be appreciated!
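    As a hedged first step only (the table/relationship design remains the open question), here is a small C# sketch that reads the <item> elements and splits each dotted key into a candidate table path and a trailing column name, grouping the values by path; the input file name is an assumption.

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class KeySplitter
        {
            static void Main()
            {
                XDocument doc = XDocument.Load("input.xml");

                var fields =
                    from item in doc.Root.Elements("item")
                    let key = (string)item.Element("key")
                    let segments = key.Split('.')
                    select new
                    {
                        // Everything before the last dot: candidate table, e.g. "BaseEntityClassInfo.AdditionalDataClass".
                        TablePath = string.Join(".", segments.Take(segments.Length - 1).ToArray()),
                        // Last segment: candidate column, e.g. "SomeOtherField".
                        Column = segments.Last(),
                        Value = (string)item.Element("value")
                    };

                foreach (var group in fields.GroupBy(f => f.TablePath))
                    Console.WriteLine("{0}: {1} field(s)", group.Key, group.Count());
            }
        }

    Each distinct TablePath is a candidate DataTable, and the parent/child nesting of the paths is what would drive the DataRelation objects in the typed DataSet.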

    Read the article

  • The perfect web UI framework (with Microsoft stack?) - architecture question?

    - by Igorek
    I'm looking for suggestions for the following issue, and I realize there is really not going to be a perfect answer to my question. I have a UI built in WinForms.NET (v4.0 framework) with a WCF back-end and EF4 model objects, and I am looking to port it to the web. The UI is not huge or super complex, and it is structured well, but it is not a super simple system either. I am looking to pick a technology stack for the web front-end that will target desktop and (partially) mobile platforms, provide a good development platform to build on, and facilitate code reuse across the UI and back-end tiers.

    I would rather avoid:

    - custom coding of UI-centric scripts, because they are hard to debug, non-compiled, usually a maintenance nightmare, almost always start to contain business logic, and duplicate some of the logic that the back-end tiers have (especially validation)
    - custom coding the desktop web and mobile web UIs separately (although I realize that the mobile web UI will likely contain fewer data-entry screens and more reporting screens)
    - non-.NET technology stacks

    I would love to:

    - target the reporting capabilities of the system toward mobile web browsers
    - not have to write a single line of script (JavaScript, jQuery, etc.)
    - utilize a good collection of controls that produces an elegant UI
    - use .NET for everything

    The way I see it right now, I need to rewrite this app in Silverlight, utilize a third-party UI framework like Telerik, and redo the reports UI again for mobile platforms separately. However, I'm rather concerned about the shelf life of Silverlight and the need to deploy a different architecture to deal with the mobile platform. Is there an ASP.NET/MVC/Ajax architecture/framework/library that would allow me to get at the power of .NET without painful (IMHO) client-side scripting, while providing a decent user experience? Thank you

    Read the article

  • Issue with Callback method and maintaining CultureInfo and ASP.Net HttpRuntime

    - by Little Larry Sellers
    Hi all, here is my issue. I am working on an e-commerce solution that is deployed to multiple European countries. We persist all exceptions within the application to SQL Server, and I have found that there are records in the DB that have a DateTime in the future! We define the culture in the web.config (for example, pt-PT), and the format expected is DD-MM-YYYY. After debugging, I found that these 'future' records in the DB are caused by the callback methods we use. For example, in our caching architecture we use callbacks, as such:

        CacheItemRemovedCallback ReloadCallBack = new CacheItemRemovedCallback(OnRefreshRequest);

    When I check the current thread's CultureInfo on these callbacks, it is en-US instead of pt-PT, and the HttpContext is also null. If an exception occurs on the callback, our exception manager reports it as MM-DD-YYYY and thus it is persisted to SQL Server incorrectly. Unfortunately, in the exception manager code we use DateTime.Now, which is fine when it is not a callback. I can't change this code to be culture specific because it is shared across other verticals. So, why don't callbacks into ASP.NET maintain context? Is there any way to maintain it on this callback thread? What are the best practices here? Thanks.
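    One common mitigation, shown here as a hedged sketch rather than the solution used in this system: cache-removal callbacks run on a thread-pool thread with no HttpContext and the default thread culture, so the callback can set the configured culture explicitly before doing any work. The "pt-PT" literal stands in for the value read from web.config.

        using System.Globalization;
        using System.Threading;
        using System.Web.Caching;

        public class CacheReloader
        {
            private static readonly CultureInfo SiteCulture = CultureInfo.GetCultureInfo("pt-PT");

            // Matches the CacheItemRemovedCallback delegate signature.
            public void OnRefreshRequest(string key, object value, CacheItemRemovedReason reason)
            {
                Thread.CurrentThread.CurrentCulture = SiteCulture;
                Thread.CurrentThread.CurrentUICulture = SiteCulture;

                // ... reload the cache entry; any DateTime formatted or logged from
                // here on now uses the dd-MM-yyyy pattern instead of the en-US default.
            }
        }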

    Read the article

  • getting jpeg error

    - by bhaskaragr29
        <?php
        function LoadPNG()
        {
            /* Attempt to open */
            //require_once 'resizex.php';
            $imgname = "/home2/puneetbh/public_html/prideofhome/wp-content/uploads/268995481image_11.png";
            //$im = @imagecreatefrompng($imgname);
            $img = imagecreatefromstring(file_get_contents($imgname));
            //$im = imagecreatefrompng('images/frame.png');
            $im = imagecreatefromjpeg('images/frame.jpeg');
            //imagealphablending($img, false);
            //imagesavealpha($img, true);
            //$img = resizex("$url", 60, 65, 1);
            imagecopymerge($im, $img, 105, 93, 0, 0, 275, 258, 100);

            /* See if it failed */
            if (!$im) {
                /* Create a blank image */
                $im  = imagecreatetruecolor(150, 30);
                $bgc = imagecolorallocate($im, 255, 255, 255);
                $tc  = imagecolorallocate($im, 0, 0, 0);
                imagefilledrectangle($im, 0, 0, 150, 30, $bgc);
                /* Output an error message */
                imagestring($im, 1, 5, 5, 'Error loading ' . $imgname, $tc);
            }
            return $im;
        }

        $img = LoadPNG();
        header('Content-type: image/jpeg');
        imagejpeg($im);
        imagedestroy($im);
        imagedestroy($img);
        ?>

    I am getting these errors:

        Warning: imagecreatefromjpeg() [function.imagecreatefromjpeg]: gd-jpeg: JPEG library reports unrecoverable error: in /home2/puneetbh/public_html/prideapp/frame.php on line 11
        Warning: imagecreatefromjpeg() [function.imagecreatefromjpeg]: 'images/frame.jpeg' is not a valid JPEG file in /home2/puneetbh/public_html/prideapp/frame.php on line 11
        Warning: imagecopymerge(): supplied argument is not a valid Image resource in /home2/puneetbh/public_html/prideapp/frame.php on line 16
        Warning: Cannot modify header information - headers already sent by (output started at /home2/puneetbh/public_html/prideapp/frame.php:11) in /home2/puneetbh/public_html/prideapp/frame.php on line 34
        Warning: imagejpeg(): supplied argument is not a valid Image resource in /home2/puneetbh/public_html/prideapp/frame.php on line 35
        Warning: imagedestroy(): supplied argument is not a valid Image resource in /home2/puneetbh/public_html/prideapp/frame.php on line 36

    Read the article

  • Force IE to prompt the download box

    - by Bruno Costa
    Hello, I'm having a problem with some reports in the application I'm maintaining. I have a button that does a postback to the server, gathers some information, and then returns to the client and opens a popup to download the report.

        private void grid_ItemCommand(object source, System.Web.UI.WebControls.DataGridCommandEventArgs e)
        {
            ...
            ClientScript.RegisterClientScriptBlock(this.GetType(), "xxx",
                "<script>javascript:window.location('xx.aspx?m=x','xxx','width=750,height=350,directories=no,location=no,menubar=no,scrollbars,status=no,toolbar=no,resizable=yes,left=50,top=50');</script>");
        }

    Then in xxx.aspx I have the code:

        Response.ClearContent();
        Response.ClearHeaders();
        Response.TransmitFile(tempFileName);
        Response.Flush();
        Response.Close();
        File.Delete(tempFileName);
        Response.End();

    This works fine if the IE option "Automatic prompting for file downloads" is enabled. But by default this is disabled, and I need to force the download box to be prompted. Can I do anything without changing a lot of code? Thanks.
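    A hedged sketch of the usual low-code change to try first: mark the response as an attachment so the browser treats it as a file download rather than navigable content. The helper shape and content type are assumptions; tempFileName is the variable from the code above. Whether this alone satisfies IE's prompting rules for a script-opened popup would still need to be tested against the "Automatic prompting for file downloads" setting.

        using System.IO;
        using System.Web;

        static void SendAsAttachment(HttpResponse response, string tempFileName)
        {
            response.ClearContent();
            response.ClearHeaders();
            response.ContentType = "application/octet-stream";
            response.AddHeader("Content-Disposition",
                "attachment; filename=\"" + Path.GetFileName(tempFileName) + "\"");
            response.TransmitFile(tempFileName);
            response.Flush();
        }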

    Read the article

  • API-based solutions for sending payments to people without bank accounts

    - by Tauren
    I'm looking for inexpensive ways to send payments to hundreds or thousands of individual contractors, even if they do not have bank accounts. Currently I only need to support payments in the USA, but it may eventually be international. Here's the scenario: I offer a service that allows an organization or manager-type person to coordinate contractors for very short-term jobs. These jobs are typically only an hour or two in length. A contractor may get only one job over an entire month, several jobs spread out over a month, multiple jobs on a single day, or any other combination. Thus, a single contractor could earn as little as one job's payment up to potentially payment for dozens. Payment for a month could be anywhere from $10 up to thousands of dollars. Right now, the system provides payroll reports to the manager, and it is the manager's responsibility to produce checks, stuff envelopes, and send mail via the US Postal Service. I'd like to remove this burden from the manager and have all the payments taken care of for them automatically by the system. I'm not sure where to start or what the best options would be. I'm starting to look into the following solutions, but don't know specifics yet and would like some advice before pursuing them. I'd also like to hear about other ideas or suggestions.

    - PayPal (Send Money, Adaptive Payments, x.com, other???)
    - Amazon (Flexible Payments System?)
    - Fund some sort of pre-paid debit card?
    - Web service with an API that mails checks for you?
    - Direct deposit via a bank API (for users with bank accounts)?

    The problem is that many of these contractors may not be able to obtain bank accounts or credit cards within the USA. I don't mind doing a hybrid of solutions, but are there any that would work well with this issue? I want the solution to be easy to use for the contractors, meaning that they can get the money easily (via a check in the mail, debit card ATM withdrawal, etc.).

    Read the article

  • Tfs 2010: how to set up a corporate source server?

    - by bwerks
    Hi all, I'm looking for guidance in setting up a corporate source server, but when I google this topic the best I can come up with is articles and walkthroughs concerned with configuring VS to use Microsoft's public symbol servers for debugging .NET assemblies. For background, the environment I'm concerned with is VS2010/TFS2010. Basically, the workflow I'm looking to facilitate is this:

    1. Customer reports a problem with an application.
    2. The appropriate version of the application is installed on a virtual machine.
    3. A developer repros the bug by attaching to the process on the virtual machine and leveraging the source server (symbol server?) on the corporate domain. This is the step I'm concerned with.
    4. The developer pinpoints the problem and fixes the bug in a workspace.
    5. The developer performs a DLL swap on the VM to test the changes? (side topic, not sure on this)
    6. Normal development/source control workflows.

    Any advice is welcome!

    Edit: since writing this, I have stumbled on this article, which is a nice writeup on the configuration of a source server for TFS 2008. Has anyone adapted this for TFS 2010?

    Read the article
