Search Results

Search found 12829 results on 514 pages for 'hard'.

Page 440/514 | < Previous Page | 436 437 438 439 440 441 442 443 444 445 446 447  | Next Page >

  • A question about writing a background/automatic/silent downloader/installer for an app in C#.

    - by Mike Webb
    Background: I have a main application that needs to be able to go to the web and download DLL files associated with it (ones that we write, located on our server). It really needs to be able to download these DLL files to the application folder in "C:\Program Files\". In the past I have used System.Net.WebClient to download whatever files I wanted from the web.
    The Issue: I have had a lot of trouble downloading data in the past and saving it to files on a user's hard drive. I get many reports from users saying that this does not work, and it is generally because of user rights issues in the program. In the cases where it was a rights issue, every user could go to the exact file location on the web, download it, and then save it to the right place manually. I want this to work like all the other programs I have seen download/install in this fashion (i.e. Firefox plugin updates, Flash Player, Java, Adobe Reader, etc.). All of these work without a hitch.
    The Question: Is there some code I need to use to give my downloader program special rights to the Program Files folder? Can I even do this? Is there a better class or library that I should use? Is there a different approach to downloading files I should take, such as using threads or something else to download data? Any help here is appreciated. I want to try to stay away from third-party apps/libraries if at all possible (other than Microsoft, of course) due to licensing issues, but still send any suggestions my way. Again, other programs seem to have the rights issues and download capability figured out. I want this same capability.
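    For context, a minimal sketch of the download step is below; the URL and file name are made up for illustration. Writing under "C:\Program Files\" on Vista and later normally requires the process to run elevated (for example via an application manifest requesting administrator rights), which is broadly how the updaters mentioned above obtain their permissions.

      using System;
      using System.IO;
      using System.Net;

      class DllUpdater
      {
          static void Main()
          {
              // Hypothetical values -- replace with your own server URL and DLL name.
              string url = "http://example.com/updates/MyPlugin.dll";
              string target = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "MyPlugin.dll");

              using (WebClient client = new WebClient())
              {
                  // DownloadFile writes straight to disk; it throws
                  // UnauthorizedAccessException if the process cannot write to the
                  // target folder (typical for Program Files without elevation).
                  client.DownloadFile(url, target);
              }
          }
      }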

    Read the article

  • Call method immediately after object construction in LINQ query

    - by Steffen
    I've got some objects which implement this interface: public interface IRow { void Fill(DataRow dr); } Usually when I select something out of the db, I go: public IEnumerable<IRow> SelectSomeRows { DataTable table = GetTableFromDatabase(); foreach (DataRow dr in table.Rows) { IRow row = new MySQLRow(); // Disregard the MySQLRow type, it's not important row.Fill(dr); yield return row; } } Now with .NET 4, I'd like to use AsParallel, and thus LINQ. I've done some testing on it, and it speeds things up a lot (IRow.Fill uses reflection, so it's hard on the CPU). Anyway my problem is, how do I go about creating a LINQ query which calls Fill as part of the query, so it's properly parallelized? For testing performance I created a constructor which took the DataRow as an argument; however, I'd really love to avoid this if somehow possible. With the constructor in place, it's obviously simple enough: public IEnumerable<IRow> SelectSomeRowsParallel { DataTable table = GetTableFromDatabase(); return from DataRow dr in table.Rows.AsParallel() select new MySQLRow(dr); } However, like I said, I'd really love to be able to just stuff my Fill method into the LINQ query, and thus not need the constructor overload.
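    One way the Fill call can live inside the query is a statement lambda in method syntax, sketched below as an untested outline; IRow, MySQLRow and the DataTable come from the question and are assumed to exist as described.

      using System.Collections.Generic;
      using System.Data;
      using System.Linq;

      public static class RowLoader
      {
          // The statement lambda constructs the row and calls Fill per element,
          // so no DataRow constructor overload is needed.
          public static IEnumerable<IRow> SelectSomeRowsParallel(DataTable table)
          {
              return table.Rows.Cast<DataRow>()
                  .AsParallel()
                  .Select(dr =>
                  {
                      IRow row = new MySQLRow();
                      row.Fill(dr);
                      return row;
                  });
          }
      }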

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:
    - The GUI tools aren't mature.
    - Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
    - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions). With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.
    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

    Read the article

  • Is there any way to filter certain things in pages served by IIS?

    - by Ruslan
    Hello, This is my first time posting here so please keep that in mind... I'll try to be short and get right to defining the problem. We have an ASP.NET 2 application (eCommerce package) running on IIS (Windows Server 2003). The main site's page(s) are using plain HTTP (no SSL), but the whole checkout process and the shopping cart page is using SSL (HTTPS). Now, the problem is that the site's header is located in a template file, and inside it it has a plain HTML 'img' tag calling an image with the "http://" portion hard-coded into it... This header appears on absolutely every page (including the https pages), and due to its insecure image tag, a warning box pops up in IE on every stage of the checkout process... Now, the problem: The live application cannot be touched in any way (no changes can be made to the template (so simply changing "http://" to "//" is not an option), IIS cannot be restarted, and the website/app pool cannot be restarted). Is there any way in the world (maybe plugin for IIS or a setting somewhere) that I can filter the pages right before they are served to replace the '<img src="http://example.com/image.jpg">' with '<img src="//example.com/image.jpg">' in the final HTML? Possibly via a regular expression or something? Thanks to everybody in advance.
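    Setting the "cannot touch anything" constraint aside for a moment, the standard ASP.NET mechanism for rewriting outgoing HTML is a response filter installed from an HttpModule; a rough, untested sketch is below. Note that registering the module means editing web.config, which recycles the application, so it may not fit the stated constraints; the class names here are invented, and the string replacement is deliberately naive.

      using System;
      using System.IO;
      using System.Text;
      using System.Web;

      // Rewrites src="http://  to  src="//  in the outgoing HTML.
      public class ProtocolRelativeFilter : MemoryStream
      {
          private readonly Stream _inner;
          public ProtocolRelativeFilter(Stream inner) { _inner = inner; }

          public override void Write(byte[] buffer, int offset, int count)
          {
              // Naive: assumes UTF-8 output and that a match is never split
              // across two Write calls; a production filter would buffer.
              string chunk = Encoding.UTF8.GetString(buffer, offset, count);
              chunk = chunk.Replace("src=\"http://", "src=\"//");
              byte[] output = Encoding.UTF8.GetBytes(chunk);
              _inner.Write(output, 0, output.Length);
          }
      }

      public class RewriteModule : IHttpModule
      {
          public void Init(HttpApplication app)
          {
              app.ReleaseRequestState += OnReleaseRequestState;
          }

          private void OnReleaseRequestState(object sender, EventArgs e)
          {
              HttpResponse response = ((HttpApplication)sender).Response;
              if (response.ContentType == "text/html")
                  response.Filter = new ProtocolRelativeFilter(response.Filter);
          }

          public void Dispose() { }
      }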

    Read the article

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me from my experimenting with Haskell, Erlang and Scheme that functional programming languages are a fantastic way to answer scientific questions. For example, taking a small set of data and performing some extensive analysis on it to return a significant answer. It's great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time it seems that by their very nature, they are more suited to finding analytical solutions than actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical', such as "accept a request, find and process the requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-polish-notation version of Python. So I am just curious whether I have missed something in these languages or I am just way off the ball in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks, or practical tasks that are best performed by functional languages?

    Read the article

  • How would you mask data returned in a Dynamic Data for Entities website?

    - by David Stratton
    I'm doing this in Visual Studio 2008, not 2010, in case there is a relevant difference between the two versions of the Dynamic Data websites. How would I mask data in the automatically generated tables in a Dynamic Data for Entities website? The scenario is that we have one table where we want to allow users to ENTER sensitive data, but not VIEW sensitive data, so... (In the list below, I'm using "template" to mean "the web page generated automatically based on the schema and action". I'm sure that's the wrong terminology, but the meaning should be clear.)
    - The "Insert" template should have the field's textbox available for the user to type a value in.
    - The "Edit" template should have the field's textbox blanked out (empty string) regardless of what was in the field in the database in the first place, but the user should be able to type in new data and have it saved.
    - The "View" template should either have the data for this field masked, or not visible at all.
    - The auto-generated table showing the list of records should also have this field masked or non-visible.
    I can do this easily with standard Web Forms, but I'm having a hard time figuring this out in the Dynamic Data site I'm working on. Masking data is such a common task, I have to believe Microsoft thought of this and provided a way to do it...
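    For reference, the usual Dynamic Data approach is a metadata "buddy" class plus a custom field template; a rough sketch of the metadata side is below. The entity name SensitiveRecord, the field SecretValue and the template name "MaskedText" are invented for illustration - the template itself would live under DynamicData/FieldTemplates and decide whether to render a blank textbox or a masked value per mode.

      using System.ComponentModel.DataAnnotations;

      // Pairs the generated entity class with a metadata class.
      [MetadataType(typeof(SensitiveRecordMetadata))]
      public partial class SensitiveRecord { }

      public class SensitiveRecordMetadata
      {
          // "MaskedText" would refer to custom field templates
          // (MaskedText.ascx / MaskedText_Edit.ascx) controlling how the
          // field is rendered in List, View, Edit and Insert modes.
          [UIHint("MaskedText")]
          public object SecretValue { get; set; }
      }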

    Read the article

  • Simplest PHP Routing framework .. ?

    - by David
    I'm looking for the simplest implementation of a routing framework in PHP, in a typical PHP environment (Running on Apache, or maybe nginx) .. It's the implementation itself I'm mostly interested in, and how you'd accomplish it. I'm thinking it should handle URL's, with the minimal rewriting possible, (is it really a good idea, to have the same entrypoint for all dynamic requests?!), and it should not mess with the querystring, so I should still be able to fetch GET params with $_GET['var'] as you'd usually do.. So far I have only come across .htaccess solutions that puts everything through an index.php, which is sort of okay. Not sure if there are other ways of doing it. How would you "attach" what URL's fit to what controllers, and the relation between them? I've seen different styles. One huge array, with regular expressions and other stuff to contain the mapping. The one I think I like the best is where each controller declares what map it has, and thereby, you won't have one huge "global" map, but a lot of small ones, each neatly separated. So you'd have something like: class Root { public $map = array( 'startpage' => 'ControllerStartPage' ); } class ControllerStartPage { public $map = array( 'welcome' => 'WelcomeControllerPage' ); } // Etc ... Where: 'http://myapp/' // maps to the Root class 'http://myapp/startpage' // maps to the ControllerStartPage class 'http://myapp/startpage/welcome' // maps to the WelcomeControllerPage class 'http://myapp/startpage/?hello=world' // Should of course have $_GET['hello'] == 'world' What do you think? Do you use anything yourself, or have any ideas? I'm not interested in huge frameworks already solving this problem, but the smallest possible implementation you could think of. I'm having a hard time coming up with a solution satisfying enough, for my own taste. There must be something pleasing out there that handles a sane bootstrapping process of a PHP application without trying to pull a big magic hat over your head, and force you to use "their way", or the highway! ;)

    Read the article

  • Is there any reason to use C instead of C++ for embedded development?

    - by Piotr Czapla
    Question: I have two compilers on my hardware: C++ and C89. I'm thinking about using C++ with classes but without polymorphism (to avoid vtables). The main reasons I'd like to use C++ are:
    - I prefer to use "inline" functions instead of macro definitions.
    - I'd like to use namespaces, as prefixes clutter the code.
    - I see C++ as a bit more type-safe, mainly because of templates and verbose casting.
    - I really like overloaded functions and constructors (used for automatic casting).
    Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)?
    Conclusion: Thank you for your answers, they were really helpful! I thought the subject through and I will stick with C, mainly because:
    - It is easier to predict the actual code in C, and this is really important if you have only 4 KB of RAM.
    - My team consists mainly of C developers, so the advanced features of C++ won't be frequently used.
    - I've found a way to inline functions in my C compiler (C89).
    It is hard to accept one answer as you provided so many good answers. Unfortunately I can't create a wiki and accept it, so I will choose the answer that made me think the most.

    Read the article

  • In Inform 7, is it possible to use a second noun construct with "pull"?

    - by Beska
    I'll eat my hat if I get a good answer to this... I suspect that although I'm a rank beginner in Inform 7 and this probably isn't that hard, there are not many people here who are familiar with Inform 7. Still, nothing ventured... I'm trying to create a custom response to a "pull" action. Unfortunately, I think the "pull" action doesn't normally expect a second noun. So I'm trying something like this: The nails are some things in the Foyer. The nails are scenery. Instead of pulling the nails: If the second noun is nothing: say "How? Are you going to pull the nails with your teeth?"; otherwise: say "I don't think that's going to do the job." But while this compiles, and the first part works, the "I don't think..." section is never called... the interpreter just responds "I only understood you as far as wanting to pull the nails." Do I have to create my own custom action for this? Overwrite the standard pull action? Am I missing something simple that will allow me to get this to work?

    Read the article

  • Old dll.config problem !

    - by user313421
    From what I've googled, this has been a problem since 2005 for anyone who needs to read an assembly's configuration from its own config file ("*.dll.config"), and Microsoft hasn't done anything about it yet. Story: if you try to read a setting from a class library (plug-in), you fail. Instead, the config of the main application domain (the EXE which is using the plug-in) is read, and because there is probably no such config, your plug-in will use the default setting that was hard-coded when you created its settings for the first time. Any change to the .dll.config won't be seen by your plug-in, and you wonder why it's there at all! If you want to replace it and start searching, you may find something like this: http://stackoverflow.com/questions/594298/c-dll-config-file - but that's just some ideas and one line of code. A good replacement for the built-in config shouldn't read from the file system each time we need a config value, so we can store the values in memory; then what if the user changes the config file? We need a FileSystemWatcher, we need some design like a singleton... and finally we end up back at the same point: the configuration system of .NET, except that ours works. It seems MS did everything but forgot why they built the ".dll.config". Since no DLL is going to execute by itself - they are referenced from other apps (even if used on the web) - why is there such a "*.dll.config" file at all? I'm not going to argue whether it's good to have multiple config files or not. It's my design (plug-able components). Finally: after all these years, is there any good practice, such as a custom settings class to add to each assembly that reads from its own config file?
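    One commonly suggested workaround, sketched here as an outline rather than tested code, is to open the plug-in's own *.dll.config explicitly via a file map instead of relying on the AppDomain's default config (requires a reference to System.Configuration):

      using System.Configuration;
      using System.Reflection;

      public static class PluginConfig
      {
          // Reads an appSettings value from <assembly path>.config
          // rather than from the host EXE's config file.
          public static string GetSetting(string key)
          {
              string dllPath = Assembly.GetExecutingAssembly().Location;
              ExeConfigurationFileMap map = new ExeConfigurationFileMap();
              map.ExeConfigFilename = dllPath + ".config";

              Configuration config =
                  ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);

              KeyValueConfigurationElement element = config.AppSettings.Settings[key];
              return element != null ? element.Value : null;
          }
      }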

    Read the article

  • Newsletter send using AJAX to avoid PHP timeout

    - by simPod
    I need to send newsletters. I already have a PHP script that sends mass emails, but it won't work for long as the email database grows, because of PHP's max script run time. So, to avoid it I came up with a solution: I would call my PHP script using AJAX in JavaScript and give it a $_GET parameter with a count of 20, so the script would send only 20 emails. Then AJAX would receive the success response and call my script again and again until all emails are sent. Is it possible? I'm asking because I have never seen such a solution, so I'm wondering if it is viable (it's kinda hard to implement this into my PHP framework, so I'm asking experts here first). To sum it up, here's a code skeleton: <script> var emailCount = 1000; //would get this from DB var runCount = 20; //number of emails sent in one cycle var from = 0; //start number function sendMail(){ if(from<emailCount){ jQuery.ajaxfunction({ path: 'script.php?from='+from+'&count='+runCount successFc: function(){ from+=runCount; sendMail(); } }) } } sendMail(); </script> So, are there any obstacles? Thanks a lot.

    Read the article

  • Vertex Buffers in opengl

    - by JB
    I'm making a small 3d graphics game/demo for personal learning. I know d3d9 and quite a bit about d3d11, but little about opengl at the moment, so I'm intending to abstract out the actual rendering of the graphics so that my scene graph and everything "above" it needs to know little about how to actually draw the graphics. I intend to make it work with d3d9, then add d3d11 support, and finally opengl support - just as a learning exercise to learn about 3d graphics and abstraction. I don't know much about opengl at this point though, and don't want my abstract interface to expose anything that isn't simple to implement in opengl. Specifically I'm looking at vertex buffers. In d3d they are essentially an array of structures, but looking at the opengl interface the equivalent seems to be vertex arrays. However, these seem to be organised rather differently: you need a separate array for vertices, one for normals, one for texture coordinates, etc., and set them with glVertexPointer, glTexCoordPointer, etc. I was hoping to be able to implement a VertexBuffer interface much like the DirectX one, but it looks like in d3d you have an array of structures and in opengl you need a separate array for each element, which makes finding a common abstraction quite hard to make efficient. Is there any way to use opengl in a similar way to directx? Or any suggestions on how to come up with a higher level abstraction that will work efficiently with both systems?

    Read the article

  • How to pass date parameters to Crystal Reports 2008 from an ASP.NET App?

    - by Unlimited071
    Hello all, I'm passing some parameters to a CR report programmatically and it was working fine, but now that I have added a new date parameter to the report, for some reason it is prompting me to enter that parameter on screen (the user isn't allowed to set that parameter; it is the system that must set it). I haven't changed a thing other than adding the new date parameter, and all the other parameters behave normally; just the date parameter is prompted for, even though I've already set a value for it. This is the code I've got: private void ConfigureCrystalReports() { crystalReportViewer.ReportSource = GetReportPath(); crystalReportViewer.DataBind(); ConnectionInfo connectionInfo = GetConnectionInfo(); TableLogOnInfos tableLogOnInfos = crystalReportViewer.LogOnInfo; foreach (TableLogOnInfo tableLogOnInfo in tableLogOnInfos) { tableLogOnInfo.ConnectionInfo = connectionInfo; } ArrayList totOriValues = new ArrayList(); totOriValues.Add(date.ToString("MM/dd/yyyy HH:mm:ss")); ParameterFields parameterFields = crystalReportViewer.ParameterFieldInfo; SetCurrentValuesForParameterField(parameterFields, totOriValues, "DateParameter"); } private static void SetCurrentValuesForParameterField(ParameterFields parameterFields, ArrayList arrayList, string parameterName) { ParameterValues currentParameterValues = new ParameterValues(); foreach (object submittedValue in arrayList) { ParameterDiscreteValue parameterDiscreteValue = new ParameterDiscreteValue(); parameterDiscreteValue.Value = submittedValue.ToString(); currentParameterValues.Add(parameterDiscreteValue); } ParameterField parameterField = parameterFields[parameterName]; parameterField.CurrentValues = currentParameterValues; } Just for the sake of completeness: I have checked that the parameter is indeed a date and that it is well formed. CR prompts me to enter it in the format (mm/dd/yyyy hh:mm:ss), so I pass the date in that exact format (in fact I've even tried hard-coding a well-formed date and it still prompts me to enter the date). Am I doing something wrong?

    Read the article

  • Scrolling list items in wpf

    - by sudarsanyes
    I guess the following picture depicts the problem better than texts... That is what I need. <ListBox x:Name="NamesListBox" ItemsSource="{Binding}"> <ListBox.ItemsPanel> <ItemsPanelTemplate> <WrapPanel x:Name="ItemWrapPanel"> <WrapPanel.RenderTransform> <TranslateTransform x:Name="ItemWrapPanelTransformation" X="0" /> </WrapPanel.RenderTransform> <WrapPanel.Triggers> <EventTrigger RoutedEvent="WrapPanel.Loaded"> <BeginStoryboard> <Storyboard> <DoubleAnimation Storyboard.TargetName="ItemWrapPanelTransformation" Storyboard.TargetProperty="X" To="-1000" From="{Binding ElementName=ScrollingListItemsWindow, Path=Width}" Duration="0:0:9" RepeatBehavior="100" /> </Storyboard> </BeginStoryboard> </EventTrigger> </WrapPanel.Triggers> </WrapPanel> </ItemsPanelTemplate> </ListBox.ItemsPanel> <ListBox.ItemTemplate> <DataTemplate> <Grid> <Label Content="{Binding}" /> </Grid> </DataTemplate> </ListBox.ItemTemplate> </ListBox> is what I did. But here I don't wanna hard code -1000 for the X value, rather I want to determine this based on the length of the wrap panel/number of items. Can somebody help me with this ?? Reason why I choose list box is that the number of items can increase and decrease at any time and suits the problem better. If you have got any other idea, please suggest it too. Thanks.
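    One option, sketched roughly below, is to drop the XAML trigger and start the animation from a Loaded handler on the WrapPanel, so the travel distance can be taken from the panel's measured width instead of a hard-coded -1000. The handler name and window class name are assumptions to match the question's markup (a Loaded="ItemWrapPanel_Loaded" attribute would need to be added to the WrapPanel); treat it as an untested outline.

      using System;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Media;
      using System.Windows.Media.Animation;

      public partial class ScrollingListItemsWindow : Window
      {
          private void ItemWrapPanel_Loaded(object sender, RoutedEventArgs e)
          {
              WrapPanel panel = (WrapPanel)sender;
              TranslateTransform transform = (TranslateTransform)panel.RenderTransform;

              DoubleAnimation animation = new DoubleAnimation
              {
                  From = ActualWidth,        // start at the right edge of the window
                  To = -panel.ActualWidth,   // end once the whole panel has scrolled past
                  Duration = TimeSpan.FromSeconds(9),
                  RepeatBehavior = RepeatBehavior.Forever
              };
              // Animate the transform directly instead of via a Storyboard.
              transform.BeginAnimation(TranslateTransform.XProperty, animation);
          }
      }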

    Read the article

  • Pass Parameter to Subroutine in Codebehind

    - by Sanjubaba
    I'm trying to pass an ID of an activity (RefNum) to a Sub in my codebehind. I know I'm supposed to use parentheses when passing parameters to subroutines and methods, and I've tried a number of ways and keep receiving the following error: BC30203: Identifier expected. I'm hard-coding it on the front-end just to try to get it to pass [ OnDataBound="FillSectorCBList("""WK.002""")" ], but it's obviously wrong. :( Front-end: <asp:DetailsView ID="dvEditActivity" AutoGenerateRows="False" DataKeyNames="RefNum" OnDataBound="dvSectorID_DataBound" OnItemUpdated="dvEditActivity_ItemUpdated" DataSourceID="dsEditActivity" > <Fields> <asp:TemplateField> <ItemTemplate> <br /><span style="color:#0e85c1;font-weight:bold">Sector</span><br /><br /> <asp:CheckBoxList ID="cblistSector" runat="server" DataSourceID="dsGetSectorNames" DataTextField="SectorName" DataValueField="SectorID" OnDataBound="FillSectorCBList("""WK.002""")" ></asp:CheckBoxList> <%-- Datasource to populate cblistSector --%> <asp:SqlDataSource ID="dsGetSectorNames" runat="server" ConnectionString="<%$ ConnectionStrings:dbConn %>" ProviderName="<%$ ConnectionStrings:dbConn.ProviderName %>" SelectCommand="SELECT SectorID, SectorName from Sector ORDER BY SectorID"></asp:SqlDataSource> </ItemTemplate> </asp:TemplateField> </Fields> </asp:DetailsView> Code-behind: Sub FillSectorCBList(ByVal RefNum As String, ByVal sender As Object, ByVal e As System.EventArgs) Dim SectorIDs As New ListItem Dim myConnection As String = ConfigurationManager.ConnectionStrings("dbConn").ConnectionString() Dim objConn As New SqlConnection(myConnection) Dim strSQL As String = "SELECT DISTINCT A.RefNum, AS1.SectorID, S.SectorName FROM Activity A LEFT OUTER JOIN Activity_Sector AS1 ON AS1.RefNum = A.RefNum LEFT OUTER JOIN Sector S ON AS1.SectorID = S.SectorID WHERE A.RefNum = @RefNum ORDER BY A.RefNum" Dim objCommand As New SqlCommand(strSQL, objConn) objCommand.Parameters.AddWithValue("RefNum", RefNum) Dim ad As New SqlDataAdapter(objCommand) Try [Code] Finally [Code] End Try objCommand.Connection.Close() objCommand.Dispose() objConn.Close() End Sub Any advice would be great. I'm not sure if I even have the right approach. Thank you!

    Read the article

  • Transitioning from desktop app written in C++ to a web-based app

    - by Karim
    We have a mature Windows desktop application written in C++. The application's GUI sits on top of a Windows DLL that does most of the work for the GUI (it's kind of the engine). It, too, is written in C++. We are considering transitioning the Windows app to be a web-based app for various reasons. What I would like to avoid is having to write the CGI for this web-based app in C++. That is, I would rather have the power of a 4G language like Python or a .NET language for creating the web-based version of this app. So, the question is: given that I need to use a C++ DLL on the backend to do the work of the app, what technology stack would you recommend for sitting between the user's browser and our C++ DLL? We can assume that the web server will be Windows. Some options:
    - Write a COM layer on top of the Windows DLL which can then be accessed via .NET, and use ASP.NET for the UI.
    - Access the exported DLL interface directly from .NET, and use ASP.NET for the UI.
    - Write a custom Python library that wraps the Windows DLL so that the rest of the code can be written in Python.
    - Write the CGI using C++ and a C++-based MVC framework like Wt.
    Concerns:
    - I would rather not use C++ for the web framework if it can be avoided - I think languages like Python and C# are simply more powerful and efficient in terms of development time.
    - I'm concerned that by mixing managed and unmanaged code with one of the .NET solutions, I'm asking for lots of little problems that are hard to debug (purely anecdotal evidence for that).
    - The same is true for using a Python layer. Anything that's slightly off the beaten path like that worries me, in that I don't have much evidence one way or the other whether this is a viable long-term solution.
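    As a point of reference for the second option above, calling an exported C-style function in the DLL directly from C# usually comes down to a P/Invoke declaration like the sketch below. The DLL name, entry point and signature are invented here and would have to match the real exports; more complex data would need explicit marshalling.

      using System;
      using System.Runtime.InteropServices;

      public static class EngineInterop
      {
          // Hypothetical export: int ProcessOrder(const char* orderXml)
          [DllImport("EngineCore.dll",
                     CallingConvention = CallingConvention.Cdecl,
                     CharSet = CharSet.Ansi)]
          private static extern int ProcessOrder(string orderXml);

          public static int Process(string orderXml)
          {
              // String and return-code marshalling is handled by the runtime;
              // structs passed across the boundary would need [StructLayout].
              return ProcessOrder(orderXml);
          }
      }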

    Read the article

  • Low-Hanging Fruit: Obfuscating non-critical values in JavaScript

    - by Piskvor
    I'm making an in-browser game of the type "guess what place/monument/etc. is in this satellite/aerial view", using Google Maps JS API v3. However, I need to protect against cheaters - you have to pass a google.maps.LatLng and a zoom level to the map constructor, which means a cheating user only needs to view source to get to this data. I am already unsetting every value I possibly can without breaking the map (such as center and the manipulation functions like setZoom()), and initializing the map in an anonymous function (so the object is not visible in global namespace). Now, this is of course in-browser, client-side, untrusted JavaScript; I've read much of the obfuscation tag and I'm not trying to make the script bullet-proof (it's just a game, after all). I only need to make the obfuscation reasonably hard against the 1337 Java5kryp7 haxz0rz - "kid sister encryption", as Bruce Schneier puts it. Anything harder than base64 encoding would deter most cheaters by eliminating the lowest-hanging fruit - if the cheater is smart and determined enough to use a JS debugger, he can bypass anything I can do (as I need to pass the value to Google Maps API in plaintext), but that's unlikely to happen on a mass scale (there will also be other, not-code-related ways to prevent cheating). I've tried various minimizers and obfuscators, but those will mostly deal with code - the values are still shown verbatim. TL;DR: I need to obfuscate three values in JavaScript. I'm not looking for bullet-proof armor, just a sneeze-guard. What should I use?

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request, read the stream into ExpertPDF, and then write the bits to file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and re-request the documents as PDFs, but oftentimes things are getting cached. For example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broken because the CSS doesn't exist. But if I request that page through the PDF generator, it still looks OK, which means the CSS is cached somewhere. Here's the relevant PDF creation code: // Create a request HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url); request.UserAgent = "IE 8.0"; request.ContentType = "application/x-www-form-urlencoded"; request.Method = "GET"; // Send the request HttpWebResponse resp = (HttpWebResponse)request.GetResponse(); if (resp.IsFromCache) { System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!"); } else { System.Web.HttpContext.Current.Trace.Write("not from cache"); } // Read the response pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf"); When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to do a hard refresh of all resources? UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to resort to this inelegant solution?
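    One thing worth checking on the request side, independent of anything ExpertPDF does internally, is the WebRequest cache policy; a hedged sketch of forcing a fresh fetch is below. Note this only affects the HTML request made by your own code - whatever ExpertPDF uses internally to fetch the referenced CSS and images has its own caching behaviour.

      using System.Net;
      using System.Net.Cache;

      static class FreshFetch
      {
          // Builds a request that bypasses any locally cached copy of the URL.
          public static HttpWebRequest Create(string url)
          {
              HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
              request.CachePolicy =
                  new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
              return request;
          }
      }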

    Read the article

  • Problems with display of UTF-8 encoded content from a DB

    - by LookUp Webmaster
    Dear members of the Stackoverflow community, We are developing a web application using the Zend Framework, and we are facing some encoding issues that we hope you might help us solve. The situation goes something like this: There are certain tables on a MySQL database that need to be displayed as html. Because the site is designed using the Spanish language, the database contains some characters like "á" or "ñ". Our internal policy is to set all the encodings as UTF-8, including all the databases and the tables. The problem is, that when we retrieve the content from the DB, some characters are displayed as question marks. We are out of ideas. These are all the things that we have already tried and double-checked: 1. The SQL file from which we load all the data is properly UTF-8 encoded. 2. The SQL is loaded through phpmyadmin (which is configured as UTF-8), and the resulting tables are displayed properly. 3. The netbeans environment used for coding is also set as UTF-8. The weird thing is that all the content that is hard-coded either as php or html is displayed properly. Only the values that are extracted from the database have issues. Any ideas? Thank you very much.

    Read the article

  • vectorizing loops in Matlab - performance issues

    - by Gacek
    This question is related to these two: http://stackoverflow.com/questions/2867901/introduction-to-vectorizing-in-matlab-any-good-tutorials http://stackoverflow.com/questions/2561617/filter-that-uses-elements-from-two-arrays-at-the-same-time Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time. I've rewritten this: function B = bfltGray(A,w,sigma_r) dim = size(A); B = zeros(dim); for i = 1:dim(1) for j = 1:dim(2) % Extract local region. iMin = max(i-w,1); iMax = min(i+w,dim(1)); jMin = max(j-w,1); jMax = min(j+w,dim(2)); I = A(iMin:iMax,jMin:jMax); % Compute Gaussian intensity weights. F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2); B(i,j) = sum(F(:).*I(:))/sum(F(:)); end end into this: function B = rngVect(A, w, sigma) W = 2*w+1; I = padarray(A, [w,w],'symmetric'); I = im2col(I, [W,W]); H = exp(-0.5*(abs(I-repmat(A(:)', size(I,1),1))/sigma).^2); B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []); But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems. I suppose I've done something wrong - probably some logic mistake regarding vectorization. Well, in fact I'm not surprised: this method creates really big matrices, and probably the computations are proportionally longer. I have also tried to write it using nlfilter (similar to the second solution given by Jonas), but it seems to be hard since I use Matlab 6.5 (R13) (there are no sophisticated function handles available). So once again, I'm asking not for a ready solution, but for some ideas that would help me solve this in reasonable time. Maybe you can point out what I did wrong.

    Read the article

  • Should we deploy a Webkit browser for our intranet applications?

    - by Jeff Meatball Yang
    At my place of employment, we are increasingly finding it difficult to develop for IE, which was historically the easiest browser to target, from an intranet-app point of view. It was already deployed. It already understood NTLM authentication, thus well integrated with our domain-level security. It had neat, albeit non-standard features such as XMLDOM and XmlHTTP. Now, we are increasingly irritated by issues presented by IE: There are several versions: IE 7, 8, and soon 9 beta, which all have slightly different issues related to performance, functionality (especially re:security and zones), and aesthetics. IE 7 and 8 are slower than Webkit-based browsers. Period. There are technology limitations such as missing canvas element, CSS bugs, etc. that make it hard to use 3rd party packages or even consistently write code across IE versions. Users are increasingly using Firefox or Chrome, even for intranet use. Does anyone have experience with making a transition? Any advice would be welcome.

    Read the article

  • Returning Database Blobs in TurboGears 2.x / FCGI / Lighttpd extremely slow

    - by Tom
    Hey everyone, I am running a TG2 app on lighttpd via flup/fastcgi. We are reading images (~30kb each) from BlobFields in a MySQL database and returning those images with a custom mime type via a controller method. Caching these images on the hard disk makes no sense because they change with every request; the only reason we cache them in the DB is that creating these images is quite expensive, and the data used to create them is also present in plain text on the website. Now to the problem itself: when returning such an image, things get extremely slow. The code runs totally fine on paster itself with no visible delay, but as soon as it's running via fcgi/lighttpd the described phenomenon happens. I profiled the method of my controller that returns the blob, and the entire method runs in a few milliseconds, but when "return" executes, the entire app hangs for roughly 10 seconds. We could not reproduce the same error with PHP on FCGI. This only seems to happen with TurboGears or Pylons. Here, for your consideration, is the piece of source code concerned: @expose(content_type=CUSTOM_CONTENT_TYPE) def return_img(self, img_id): """ Return a DB persisted image when requested """ img = model.Images.by_id(img_id) #get image from DB response.headers['content-type'] = 'image/png' return img.data # this causes the app to hang for 10 seconds

    Read the article

  • Setting Class-Level Variable to Use Between Event Handlers

    - by lush
    I'm having a hard time understanding why the following code doesn't work. I'm sure it's something remedial that I'm missing or not understanding. I currently have a page that asks for user input. If, based on the input and logged in user, I find data from this page already in the database, I need to update the existing records rather than creating new ones, so I set a class-level bool to true. The problem is, when MyNextButton is clicked, PreviouslySubmitted is still false. So, I'm not sure how to make the value of this variable persist. Any advice is appreciated, thanks. public partial class MyForm : System.Web.UI.Page { private bool previouslySubmitted; protected void Page_Load(object sender, EventArgs e) { MyButton.Click += (o, i) => { q = from a in db.TableA where (a.SomeField == SomeValue) select a; if(q.Any()) { PreviouslySubmitted = true; //populate the form's fields with values from database for user to revise } } MyNextButton.Click += (o, i) => { if(PreviouslySubmitted) { //update database } else { //insert into database } }
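    For context on why the flag resets: the page class is re-created on every postback, so an ordinary instance field always starts out false again in the next request. The usual fix is to back the flag with something that survives postbacks, such as ViewState; a minimal sketch under that assumption, using the question's class name:

      public partial class MyForm : System.Web.UI.Page
      {
          // Backed by ViewState so the value survives postbacks, unlike a
          // plain instance field, which is re-initialized on every request.
          private bool PreviouslySubmitted
          {
              get
              {
                  object stored = ViewState["PreviouslySubmitted"];
                  return stored != null && (bool)stored;
              }
              set { ViewState["PreviouslySubmitted"] = value; }
          }
      }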

    Read the article

  • Swimlane Diagram Softwares with Expand/Collapse Features

    - by louis xie
    I've been searching really hard for software that can fulfill my needs, but to no avail. I have a swimlane diagram which is extremely huge and almost impossible to model using Visio or any traditional swimlane software. I need to model both the operational process and the interactions within an application and between different applications. Therefore, without wasting additional effort modelling these separately, I am looking for a solution with which I can combine both views - that is, possibly one in which I can expand/collapse/group/ungroup processes and subprocesses. Take a typical credit card process, for instance; a hypothetical description of the swimlane could be as follows:
    1. Customer submits application form to the bank.
    2. Bank Officer A receives the application form and validates that it was correctly filled in.
    3. Bank Officer A submits the application form to Bank Officer B for processing.
    4. Bank Officer B checks the credit quality of the customer through Application X.
    5. Application X submits a query to Application Y to retrieve the credit report.
    6. Application X retrieves the credit report and submits it to Application Z for computation of credit scores.
    7. Bank Officer B validates that the customer is creditworthy and submits the application to Bank Officer C for processing.
    The above is an over-simplified credit card request process, and a purely hypothetical one. What I'm trying to drive at is that each of the above processes has sub-processes, and I want to be able to switch between a "detailed" view and an "aggregated" view - if possible, adding in the time dependencies of the different tasks as well. I haven't been able to find any software which can do this.

    Read the article

  • Rails Tableless Model

    - by mplacona
    I'm creating a tableless Rails model, and am a bit stuck on how I should use it. Basically I'm trying to create a little application using Feedzirra that scans a RSS feed every X seconds, and then sends me an email with only the updates. I'm actually trying to use it as an activerecord model, and although I can get it to work, it doesn't seem to "hold" data as expected. As an example, I have an initializer method that parses the feed for the first time. On the next requests, I would like to simply call the get_updates method, which according to feedzirra, is the existing object (created during the initialize) that gets updated with only the differences. I'm finding it really hard to understand how this all works, as the object created on the initialize method doesn't seem to persist across all the methods on the model. My code looks something like: def initialize feed parse here end def get_updates feedzirra update passing the feed object here end Not sure if this is the right way of doing it, but it all seems a bit confusing and not very clear. I could be over or under-doing here, but I'd like your opinion about this approach. Thanks in advance

    Read the article
