Search Results

Search found 6511 results on 261 pages for 'everyday usage'.


  • .NET RegEx "Memory Leak" investigation

    - by Kevin Pullin
    I recently looked into some .NET "memory leaks" (i.e. unexpected, lingering GC-rooted objects) in a WinForms app. After loading and then closing a huge report, the memory usage did not drop as expected, even after a couple of gen2 collections. Assuming that the reporting control was being kept alive by a stray event handler, I cracked open WinDbg to see what was happening...

    Using WinDbg, the !dumpheap -stat command reported that a large amount of memory was consumed by string instances. Refining this further with the !dumpheap -type System.String command, I found the culprit, a 90MB string used for the report, at address 03be7930. The last step was to invoke !gcroot 03be7930 to see which object(s) were keeping it alive.

    My expectations were incorrect - it was not an unhooked event handler hanging onto the reporting control (and report string); instead it was held alive by a System.Text.RegularExpressions.RegexInterpreter instance, which is itself a descendant of System.Text.RegularExpressions.CachedCodeEntry.

    Now, the caching of Regexes is (somewhat) common knowledge, as it helps reduce the overhead of recompiling a Regex each time it is used. But what does this have to do with keeping my string alive? Based on analysis using Reflector, it turns out that the input string is stored in the RegexInterpreter whenever a Regex method is called. The RegexInterpreter holds onto this string reference until a new string is fed into it by a subsequent Regex method invocation. I'd expect similar behaviour from hanging onto Regex.Match instances, and perhaps others. The chain is something like this:

        Regex.Split, Regex.Match, Regex.Replace, etc.
        Regex.Run
        RegexScanner.Scan (RegexScanner is the base class; RegexInterpreter is the subclass described above)

    The offending Regex is only used for reporting, is rarely used, and is therefore unlikely to be used again soon to clear out the existing report string. And even if the Regex were used at a later point, it would probably be processing another large report. This is a relatively significant problem and just plain feels dirty.

    All that said, I found a few options on how to resolve, or at least work around, this scenario. I'll let the community respond first, and if no takers come forward I will fill in any gaps in a day or two.
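    A minimal sketch of one possible workaround implied by the analysis above - flushing the interpreter's stored input by running the same Regex over a tiny throwaway string. This is an editor's illustration, not the poster's confirmed fix; the reportRegex name and pattern are hypothetical stand-ins:

        using System.Text.RegularExpressions;

        static class ReportRegexSketch
        {
            // Hypothetical regex, standing in for the report-parsing one.
            static readonly Regex reportRegex = new Regex(@"\d+");

            public static void ProcessReport(string hugeReport)
            {
                foreach (Match m in reportRegex.Matches(hugeReport))
                {
                    // ... consume the matches here; note Match objects also
                    // reference the input text, so don't keep them around ...
                }

                // Per the analysis above, the cached RegexInterpreter still
                // references hugeReport at this point. Running the same Regex
                // over a tiny input replaces that stored reference, so the
                // 90MB report string becomes collectable.
                reportRegex.IsMatch(" ");
            }
        }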


  • Utility that helps in file locking - expert tips wanted

    - by maix
    I've written a subclass of file that a) provides methods to conveniently lock it (using fcntl, so it only supports Unix, which is OK for me at the moment) and b) asserts, when reading or writing, that the file is appropriately locked. Now I'm not an expert at such stuff (I've just read one paper [de] about it) and would appreciate some feedback: is it secure, are there race conditions, are there other things that could be done better? Here is the code:

        from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN, LOCK_NB

        class LockedFile(file):
            """
            A wrapper around `file` providing locking. Requires a shared lock
            to read and an exclusive lock to write.

            Main differences:

            * Additional methods: lock_ex, lock_sh, unlock
            * Refuses to read when not locked, refuses to write when not
              locked exclusively.
            * mode cannot be `w` since then the file would be truncated
              before it could be locked.

            You have to lock the file yourself, it won't be done for you
            implicitly. Only you know what lock you need.

            Example usage::

                def get_config():
                    f = LockedFile(CONFIG_FILENAME, 'r')
                    f.lock_sh()
                    config = parse_ini(f.read())
                    f.close()

                def set_config(key, value):
                    f = LockedFile(CONFIG_FILENAME, 'r+')
                    f.lock_ex()
                    config = parse_ini(f.read())
                    config[key] = value
                    f.truncate()
                    f.write(make_ini(config))
                    f.close()
            """

            def __init__(self, name, mode='r', *args, **kwargs):
                if 'w' in mode:
                    raise ValueError('Cannot open file in `w` mode')
                super(LockedFile, self).__init__(name, mode, *args, **kwargs)
                self.locked = None

            def lock_sh(self, **kwargs):
                """
                Acquire a shared lock on the file. If the file is already
                locked exclusively, do nothing.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                if self.locked == 'ex':
                    return  # would implicitly remove the exclusive lock
                return self._lock(LOCK_SH, **kwargs)

            def lock_ex(self, **kwargs):
                """
                Acquire an exclusive lock on the file.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                return self._lock(LOCK_EX, **kwargs)

            def unlock(self):
                """
                Release all locks on the file. Flushes if there was an
                exclusive lock.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                """
                if self.locked == 'ex':
                    self.flush()
                return self._lock(LOCK_UN)

            def _lock(self, mode, nonblocking=False):
                flock(self, mode | bool(nonblocking) * LOCK_NB)
                before = self.locked
                self.locked = {LOCK_SH: 'sh', LOCK_EX: 'ex', LOCK_UN: None}[mode]
                return before

            def _assert_read_lock(self):
                assert self.locked, "File is not locked"

            def _assert_write_lock(self):
                assert self.locked == 'ex', "File is not locked exclusively"

            def read(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).read(*args)

            def readline(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readline(*args)

            def readlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readlines(*args)

            def xreadlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).xreadlines(*args)

            def __iter__(self):
                self._assert_read_lock()
                return super(LockedFile, self).__iter__()

            def next(self):
                self._assert_read_lock()
                return super(LockedFile, self).next()

            def write(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).write(*args)

            def writelines(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).writelines(*args)

            def flush(self):
                self._assert_write_lock()
                return super(LockedFile, self).flush()

            def truncate(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).truncate(*args)

            def close(self):
                self.unlock()
                return super(LockedFile, self).close()

    (The example in the docstring is also my current use case for this.) Thanks for having read down to here, and possibly even answering :)


  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all, I'm a complete novice when it comes to "best practices" and the like. I tend to just write code, and if it works, why fix it? Well, that way of working is landing me in some hot water. I am writing a simple Windows service to serve a single web page. (This service will be incorporated into another project which monitors the services and some folders on a group of servers.) My problem is that whenever a request is received, the memory usage jumps up by a few KB, and it keeps going up on every request. Now, I've found that by putting GC.Collect in the mix it stops at a certain number, but I'm sure it's not meant to be used this way. I was wondering if I am missing something, or not doing something I should, to free up memory. Here is the code:

        Public Class SimpleWebService : Inherits ServiceBase

            'Set the values for the different event log types.
            Public Const EVENT_ERROR As Integer = 1
            Public Const EVENT_WARNING As Integer = 2
            Public Const EVENT_INFORMATION As Integer = 4

            Public listenerThread As Thread
            Dim HTTPListner As HttpListener
            Dim blnKeepAlive As Boolean = True

            Shared Sub Main()
                Dim ServicesToRun As ServiceBase()
                ServicesToRun = New ServiceBase() {New SimpleWebService()}
                ServiceBase.Run(ServicesToRun)
            End Sub

            Protected Overrides Sub OnStart(ByVal args As String())
                If Not HttpListener.IsSupported Then
                    CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to " & "use the HttpListener class.")
                    Me.Stop()
                End If
                Try
                    listenerThread = New Thread(AddressOf ListenForConnections)
                    listenerThread.Start()
                Catch ex As Exception
                    CreateEventLogEntry(ex.Message)
                End Try
            End Sub

            Protected Overrides Sub OnStop()
                blnKeepAlive = False
            End Sub

            Private Sub CreateEventLogEntry(ByRef strEventContent As String)
                Dim sSource As String
                Dim sLog As String
                sSource = "Service1"
                sLog = "Application"
                If Not EventLog.SourceExists(sSource) Then
                    EventLog.CreateEventSource(sSource, sLog)
                End If
                Dim ELog As New EventLog(sLog, ".", sSource)
                ELog.WriteEntry(strEventContent)
            End Sub

            Public Sub ListenForConnections()
                HTTPListner = New HttpListener
                HTTPListner.Prefixes.Add("http://*:1986/")
                HTTPListner.Start()
                Do While blnKeepAlive
                    Dim ctx As HttpListenerContext = HTTPListner.GetContext()
                    Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest)
                    HandlerThread.Start(ctx)
                    HandlerThread = Nothing
                Loop
                HTTPListner.Stop()
            End Sub

            Private Sub ProcessRequest(ByVal ctx As HttpListenerContext)
                Dim sb As StringBuilder = New StringBuilder
                sb.Append("<html><body><h1>Test My Service</h1>")
                sb.Append("</body></html>")
                Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString)
                ctx.Response.ContentLength64 = buffer.Length
                ctx.Response.OutputStream.Write(buffer, 0, buffer.Length)
                ctx.Response.OutputStream.Close()
                ctx.Response.Close()
                sb = Nothing
                buffer = Nothing
                ctx = Nothing
                'This line seems to keep the mem leak down
                'System.GC.Collect()
            End Sub

        End Class

    Please feel free to criticise and tear the code apart, but please BE KIND. I have admitted I don't tend to follow best practice when it comes to coding.
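    An editor's guess, not a confirmed diagnosis: growth of a few KB per request that GC.Collect flattens usually means ordinary collectable garbage piling up until the GC decides to run, not a true leak. Still, spawning a dedicated Thread per request is costly (each thread gets its own stack), and the thread pool is the idiomatic dispatch mechanism. A minimal sketch of the same loop on the thread pool, written in C# for brevity:

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class ListenerLoopSketch
        {
            static volatile bool keepAlive = true;

            static void ListenForConnections()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:1986/");
                listener.Start();
                while (keepAlive)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    // Reuse pooled threads instead of a fresh stack per request.
                    ThreadPool.QueueUserWorkItem(ProcessRequest, ctx);
                }
                listener.Stop();
            }

            static void ProcessRequest(object state)
            {
                var ctx = (HttpListenerContext)state;
                byte[] buffer = Encoding.UTF8.GetBytes(
                    "<html><body><h1>Test My Service</h1></body></html>");
                ctx.Response.ContentLength64 = buffer.Length;
                ctx.Response.OutputStream.Write(buffer, 0, buffer.Length);
                ctx.Response.OutputStream.Close();
                ctx.Response.Close();
            }
        }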


  • spring jboss ehcache

    - by boyd4715
    I am trying to configure my application to make use of Ehcache. I am using Spring 2.5.6, JBoss 5.1.0 GA and its embedded version of Hibernate, along with ehcache-core v2.3.1. I have the following configuration:

        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                <prop key="hibernate.hbm2ddl.auto">update</prop>
                <prop key="hibernate.show_sql">true</prop>
                <prop key="hibernate.jdbc.batch_size">20</prop>
                <prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</prop>
                <prop key="net.sf.ehcache.configurationResourceName">ehcache.xml</prop>
                <prop key="hibernate.cache.use_second_level_cache">true</prop>
                <prop key="hibernate.cache.use_structured_entries">true</prop>
                <prop key="hibernate.cache.use_query_cache">true</prop>
                <prop key="hibernate.generate_statistics">true</prop>
                <!-- prop key="hibernate.cache.use_second_level_cache">true</prop>
                <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</prop>
                <prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</prop>
                <prop key="hibernate.cache.use_second_level_cache">true</prop>
                <prop key="hibernate.cache.use_query_cache">true</prop-->
            </props>
        </property>

    This is my ehcache.xml, which is on my classpath:

        <defaultCache eternal="false" overflowToDisk="false" maxElementsInMemory="50000"
            timeToIdleSeconds="30" timeToLiveSeconds="6000" memoryStoreEvictionPolicy="LRU" />
        <cache name="com.model.SystemProperty" maxElementsInMemory="5000" eternal="true"
            overflowToDisk="false" memoryStoreEvictionPolicy="LFU" />

    I have added the following to my domain object:

        @Cache(usage = CacheConcurrencyStrategy.READ_WRITE,
               region="vsg.ecotrak.admin.store.domain.Store", include="non-lazy")

    When I start up the server, it gets stuck. Here is the output:

        13:17:09,000 INFO [SettingsFactory] Second-level cache: enabled
        13:17:09,000 INFO [SettingsFactory] Query cache: enabled
        13:17:09,016 INFO [SettingsFactory] Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        13:17:09,017 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider

    Any idea as to why this is happening? I am running on Windows 7 64-bit, if that matters. When I downgraded the ehcache jar to v1.2.3, the server started up again.


  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project, and I would like to get some input on possible ways to display database data, and on general best practice. In short, the body of my web page will show data from my database in a table-like format, with each table row showing similar data. For example:

        Name        Age   Position   Date Joined
        Jon Smith   23    Striker    18th Mar 2005
        John Doe    38    Defender   3rd Jan 1988

    In terms of functionality, I'd primarily like to give the user the ability to edit the data and, after the edit, commit it to the database and refresh the view. The reason I want to refresh the view is that the data is date-ordered, so I will need to re-sort if the user edits a date field.

    My main question is: what architecture / tools would best fulfil these requirements at a high level? From the research I have done so far, my initial conclusions were:

    ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL, but I don't want to make the learning curve any steeper for my first outing into MVC land just yet.

    Partial views to create a template, then iterating through a DataTable pulled back from my database model.

    jQuery to let the user edit data in the table, error-check edited data entries, etc.

    Also, my initial view was that caching the data is not a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date-sorted). Any thoughts on this?

    Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thought is that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, the ability to sort by column using sort glyphs in the column headers, etc.), and I don't really see any benefit over and above the solution I have outlined. Again, is there reason to reconsider this view?

    Finally, when a user edits a date, I will need to refresh the view. In order to do this, I had been reading about Html.RenderAction, which seemed like it may be a better option than using partial views, as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction, or have I misunderstood its usage?

    Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. partial views vs. Html.RenderAction, when to use a jQuery datagrid plug-in), but these issues are so intertwined that they need to be dealt with in the context of each other. Thanks
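    A minimal sketch of the edit round-trip described above (an editor's illustration under assumed names - Player and IPlayerRepository are hypothetical): the POST action commits the change and redirects back to the list action, which re-queries and re-sorts by date, so the refreshed ordering comes for free via Post/Redirect/Get.

        using System;
        using System.Linq;
        using System.Web.Mvc;

        public class Player
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime DateJoined { get; set; }
        }

        public interface IPlayerRepository   // hypothetical data-access seam
        {
            IQueryable<Player> GetAll();
            Player Get(int id);
            void Save(Player player);
        }

        public class PlayersController : Controller
        {
            private readonly IPlayerRepository repository;

            public PlayersController(IPlayerRepository repository)
            {
                this.repository = repository;
            }

            public ActionResult Index()
            {
                // Re-query and re-sort by date on every render, so an edited
                // date lands its row in the new position automatically.
                return View(repository.GetAll().OrderBy(p => p.DateJoined).ToList());
            }

            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Edit(int id, DateTime dateJoined)
            {
                Player player = repository.Get(id);
                player.DateJoined = dateJoined;
                repository.Save(player);
                // Redirect rather than render, so the view is rebuilt
                // from freshly sorted data (Post/Redirect/Get).
                return RedirectToAction("Index");
            }
        }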


  • WebBrowser Control in ATL window. How to free up memory on window unload? I'm stuck.

    - by Martin
    Hello there. I have a Win32 C++ application. There is the _tWinMain(...) method, with GetMessage(...) in a while loop at the end. Before GetMessage(...), I create the main window with:

        HWND m_MainHwnd = CreateWindowExW(WS_EX_TOOLWINDOW | WS_EX_LAYERED,
            CAxWindow::GetWndClassName(), _TEXT("http://www.-website-.com"),
            WS_POPUP, 0, 0, 1024, 768, NULL, NULL, m_Instance, NULL);
        ShowWindow(m_MainHwnd);

    If I do not create the main window, my application needs about 150K of memory. But after creating the main window, with the WebBrowser control inside it, memory usage increases to 8500K.

    Now, I want to dynamically unload the main window while my _tWinMain(...) keeps running. I'm unloading it with:

        DestroyWindow(m_MainHwnd);

    But the WebBrowser control won't unload and free up the memory it uses! Application memory used is still 8500K! I can also get the WebBrowser instance (or, with some additional code, the WebBrowser HWND):

        IWebBrowser2* m_pWebBrowser2;
        CAxWindow wnd = (CAxWindow)m_MainHwnd;
        HRESULT hRet = wnd.QueryControl(IID_IWebBrowser2, (void**)&m_pWebBrowser2);

    So I tried to free up the memory used by the main window and the WebBrowser control with (let's say it's experimental):

        if (m_pWebBrowser2) m_pWebBrowser2->Release();
        DestroyWindow(m_hwndWebBrowser); //<-- just analogous
        OleUninitialize();

    No success at all. I also created a wrapper class which creates the main window. I created a pointer and freed it up with delete:

        Wrapper* wrapper = new Wrapper(); //wrapper creates the main window inside and shows it
        //...do some stuff
        delete(wrapper);

    No success. Still 8500K. So please, how can I get rid of the main window and its WebBrowser control and free up the memory, returning to about 150K? Later I will recreate the window. It's a dynamic load and unload of the main window, depending on other commands. Thanks! Regards, Martin


  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD coding dojo, where we try to practice pure TDD on simple problems. It has occurred to me, however, that the code which emerges from the unit tests isn't the most efficient. Now, this is fine most of the time, but what if code usage grows so that efficiency becomes a problem?

    I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests?

    Here is a trivial example in Ruby: prime factorization. I followed a pure TDD approach, making the tests pass one after the other, validating my original acceptance test (commented at the bottom). What further steps could I take if I wanted to make one of the generic prime factorization algorithms emerge? To reduce the problem domain, let's say I want to get a quadratic sieve implementation... Now, in this precise case I know the "optimal" algorithm, but in most cases the client will simply add a requirement that the feature run in less than "x" time for a given environment.

        require 'shoulda'
        require 'lib/prime'

        class MathTest < Test::Unit::TestCase
          context "The math module" do
            should "have a method to get primes" do
              assert Math.respond_to? 'primes'
            end
          end

          context "The primes method of Math" do
            should "return [] for 0" do
              assert_equal [], Math.primes(0)
            end
            should "return [1] for 1" do
              assert_equal [1], Math.primes(1)
            end
            should "return [1,2] for 2" do
              assert_equal [1,2], Math.primes(2)
            end
            should "return [1,3] for 3" do
              assert_equal [1,3], Math.primes(3)
            end
            should "return [1,2,2] for 4" do
              assert_equal [1,2,2], Math.primes(4)
            end
            should "return [1,5] for 5" do
              assert_equal [1,5], Math.primes(5)
            end
            should "return [1,2,3] for 6" do
              assert_equal [1,2,3], Math.primes(6)
            end
            should "return [1,3,3] for 9" do
              assert_equal [1,3,3], Math.primes(9)
            end
            should "return [1,2,5] for 10" do
              assert_equal [1,2,5], Math.primes(10)
            end
          end

          # context "Functional acceptance test 1" do
          #   context "the prime factors of 14101980 are 1,2,2,3,5,61,3853" do
          #     should "return [1,2,2,3,5,61,3853] for ${14101980*14101980}" do
          #       assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980)
          #     end
          #   end
          # end
        end

    ...and the naive algorithm I arrived at with this approach:

        module Math
          def self.primes(n)
            if n == 0
              return []
            else
              primes = [1]
              for i in 2..n do
                if n % i == 0
                  while n % i == 0
                    primes << i
                    n = n / i
                  end
                end
              end
              primes
            end
          end
        end


  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts, and over time (a few days) we see significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    The table in our test currently starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this working, and I suspect we are missing something basic in our configuration or usage.

    We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time implications (windows of 3-5 seconds) that would require other changes in our code.

    Additional information - table and index:

        CREATE TABLE icl_contacts
        (
            id bigint NOT NULL,
            campaignfqname character varying(255) NOT NULL,
            currentstate character(16) NOT NULL,
            xmlscheduledtime character(23) NOT NULL,
            ... 25 or so other fields, most of them fixed or varying character fields ...
            CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx ON icl_contacts
            USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
                Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware there are a variety of things we could do to normalize and improve the design of this table, and some of those options may be available to us. My focus in this question is on understanding how PostgreSQL is managing the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.


  • Render a view as a string

    - by Dan Atkinson
    Hi all! I want to output two different views (one as a string that will be sent as an email, the other the page displayed to the user). Is this possible in ASP.NET MVC beta?

    I've tried multiple examples:

    RenderPartial to String in ASP.NET MVC Beta - if I use this example, I receive "Cannot redirect after HTTP headers have been sent.".

    MVC Framework: Capturing the output of a view - if I use this, I seem to be unable to do a RedirectToAction, as it tries to render a view that may not exist. If I do return the view, it is completely messed up and doesn't look right at all.

    Does anyone have any ideas/solutions for these issues, or any suggestions for better approaches? Many thanks!

    Below is an example. What I'm trying to do is create the GetViewForEmail method:

        public ActionResult OrderResult(string orderRef)
        {
            //Get the order
            Order order = OrderService.GetOrder(orderRef);
            //The email helper would do the meat and veg by getting the view as a string
            //Pass the control name (OrderResultEmail) and the model (order)
            string emailView = GetViewForEmail("OrderResultEmail", order);
            //Email the order out
            EmailHelper(order, emailView);
            return View("OrderResult", order);
        }

    Accepted answer from Tim Scott (changed and formatted a little by me):

        public virtual string RenderViewToString(
            ControllerContext controllerContext,
            string viewPath,
            string masterPath,
            ViewDataDictionary viewData,
            TempDataDictionary tempData)
        {
            Stream filter = null;
            ViewPage viewPage = new ViewPage();

            //Right, create our view
            viewPage.ViewContext = new ViewContext(controllerContext,
                new WebFormView(viewPath, masterPath), viewData, tempData);

            //Get the response context, flush it and get the response filter.
            var response = viewPage.ViewContext.HttpContext.Response;
            response.Flush();
            var oldFilter = response.Filter;

            try
            {
                //Put a new filter into the response
                filter = new MemoryStream();
                response.Filter = filter;

                //Now render the view into the memorystream and flush the response
                viewPage.ViewContext.View.Render(viewPage.ViewContext,
                    viewPage.ViewContext.HttpContext.Response.Output);
                response.Flush();

                //Now read the rendered view.
                filter.Position = 0;
                var reader = new StreamReader(filter, response.ContentEncoding);
                return reader.ReadToEnd();
            }
            finally
            {
                //Clean up.
                if (filter != null)
                {
                    filter.Dispose();
                }

                //Now replace the response filter
                response.Filter = oldFilter;
            }
        }

    Example usage, assuming a call from the controller to get the order confirmation email, passing the Site.Master location:

        string myString = RenderViewToString(this.ControllerContext,
            "~/Views/Order/OrderResultEmail.aspx", "~/Views/Shared/Site.Master",
            this.ViewData, this.TempData);


  • SQL Server stored procedures - update column based on variable name?

    - by ClarkeyBoy
    Hi, I have a data-driven site with many stored procedures. What I eventually want to be able to do is something like:

        For Each @variable In sproc inputs
            UPDATE @TableName SET @variable.toString = @variable
        Next

    I would like it to accept any number of arguments. It will basically loop through all of the inputs and update each column named after a variable with that variable's value - for example, the column "Name" would be updated with the value of @Name. I would like to have essentially one stored procedure for updating and one for creating. However, to do this I will need to be able to convert the actual name of a variable, not its value, to a string.

    Question 1: Is it possible to do this in T-SQL and, if so, how?

    Question 2: Are there any major drawbacks to something like this (such as performance or CPU usage)?

    I know that if a value is not valid, it will only prevent the update involving that variable and any subsequent ones; but all the data is validated in the VB.NET code anyway, so it will always be valid on submission to the database, and I will ensure that only variables whose columns exist can be submitted.

    Many thanks in advance. Regards, Richard Clarke

    Edit: I know about using SQL strings and the risk of SQL injection attacks - I studied this a bit in my dissertation a few weeks ago.

    Basically, the website uses an object-oriented architecture. There are many classes - for example, Product - which have many "Attributes" (I created my own class called Attribute, which has properties such as DataField, Name and Value, where DataField is used to get or update data, Name is displayed on the administration front end when creating or updating a Product, and Value, which may be displayed on the customer front end, is set by the administrator). DataField is the field I will be using in the "UPDATE Blah SET @Field = @Value". I know this is probably confusing, but it's really complicated to explain - I have a good understanding of the entire system in my head, but I can't put it into words easily.

    Basically, the structure is set up such that no user can change the value of DataField or Name, but they can change Value. I think that if I were to use dynamic parameterised SQL strings, there would therefore be no risk of SQL injection attacks. I mean, basically, loop through all the attributes so that it ends up like:

        UPDATE Products SET [Name] = @Name, Description = @Description, Display = @Display

    ...then loop through all the attributes again and add the parameter values. This will have the same effect as using stored procedures, right?

    I don't mind adding to the page load time, since this mainly affects the administration front end and will only marginally affect the customer front end.
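    A minimal sketch of the dynamic-but-parameterised approach floated at the end (an editor's illustration; ProductAttribute is a stand-in for the question's Attribute class, and the table/column names are assumptions). Column names come only from DataField, which the question says users cannot modify; all values travel as parameters:

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        class ProductAttribute   // stand-in for the question's Attribute class
        {
            public string DataField;
            public object Value;
        }

        static class ProductUpdater
        {
            public static void UpdateProduct(SqlConnection conn, int id,
                                             IList<ProductAttribute> attributes)
            {
                // Build "[Name] = @Name, [Description] = @Description, ..."
                string setClause = string.Join(", ", attributes
                    .Select(a => string.Format("[{0}] = @{0}", a.DataField))
                    .ToArray());

                using (var cmd = new SqlCommand(
                    "UPDATE Products SET " + setClause + " WHERE Id = @Id", conn))
                {
                    // Values are parameterised, so user input never becomes SQL text.
                    foreach (ProductAttribute a in attributes)
                        cmd.Parameters.AddWithValue("@" + a.DataField, a.Value);
                    cmd.Parameters.AddWithValue("@Id", id);
                    cmd.ExecuteNonQuery();
                }
            }
        }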


  • Problem with LINQ Generated class and IEnumerable to Excel

    - by mehmet6parmak
    Hi all, I wrote a method which exports values to an Excel file from an IEnumerable parameter. The method worked fine for my little test class, and I was happy until I tested it with a LINQ class.

    Method:

        public static void IEnumerableToExcel<T>(IEnumerable<T> data, HttpResponse Response)
        {
            Response.Clear();
            Response.ContentEncoding = System.Text.Encoding.Default;
            Response.Charset = "windows-1254";
            // set MIME type to be Excel file.
            Response.ContentType = "application/vnd.ms-excel;charset=windows-1254";
            // add a header to response to force download (specifying filename)
            Response.AddHeader("Content-Disposition", "attachment;filename=\"ResultFile.xls\"");

            Type typeOfT = typeof(T);
            List<string> result = new List<string>();
            FieldInfo[] fields = typeOfT.GetFields();
            foreach (FieldInfo info in fields)
            {
                result.Add(info.Name);
            }
            Response.Write(String.Join("\t", result.ToArray()) + "\n");

            foreach (T t in data)
            {
                result.Clear();
                foreach (FieldInfo f in fields)
                    result.Add(typeOfT.GetField(f.Name).GetValue(t).ToString());
                Response.Write(String.Join("\t", result.ToArray()) + "\n");
            }
            Response.End();
        }

    My little test class:

        public class Mehmet
        {
            public string FirstName;
            public string LastName;
        }

    Two usages. Success:

        Mehmet newMehmet = new Mehmet();
        newMehmet.FirstName = "Mehmet";
        newMehmet.LastName = "Altiparmak";
        List<Mehmet> list = new List<Mehmet>();
        list.Add(newMehmet);
        DeveloperUtility.Export.IEnumerableToExcel<Mehmet>(list, Response);

    Failure:

        ExtranetDataContext db = new ExtranetDataContext();
        DeveloperUtility.Export.IEnumerableToExcel<User>(db.Users, Response);

    Failure reason: when the code reaches the block used to get the fields of the template type (T), it cannot find any fields for the LINQ-generated User class. For my weird little class it works fine. Why? Thanks...
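    The likely reason, as far as I can tell (an inference, not from the original thread): LINQ to SQL generates classes whose columns are public properties backed by private fields, so typeof(T).GetFields() returns an empty array. A sketch of the same output loop driven by PropertyInfo instead, so it covers both hand-written and generated classes that expose properties:

        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;
        using System.Web;

        static class ExportSketch
        {
            // Same tab-separated output as IEnumerableToExcel, but reading
            // public properties so LINQ-generated columns are picked up.
            public static void WriteRows<T>(IEnumerable<T> data, HttpResponse Response)
            {
                PropertyInfo[] props = typeof(T).GetProperties();

                Response.Write(string.Join("\t",
                    props.Select(p => p.Name).ToArray()) + "\n");

                foreach (T t in data)
                {
                    string[] cells = props
                        .Select(p => (p.GetValue(t, null) ?? "").ToString())
                        .ToArray();
                    Response.Write(string.Join("\t", cells) + "\n");
                }
            }
        }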


  • CSS selectors: should I make my CSS easier to read or optimise for speed?

    - by Laurent Bourgault-Roy
    While working on a small website, I decided to use the PageSpeed extension to check whether there were improvements I could make so the site loads faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep usage of the class attribute in the HTML to a minimum, but if I understand PageSpeed correctly, it's much more efficient for the browser to match directly against a class name. That makes sense to me, but it also means I need to put more CSS classes in my HTML, and it makes my .css file harder to read. I usually tend to write my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    This makes it easy to read: I know this will affect the main content, that it affects something in a paragraph tag (so I won't start putting all sorts of layout code in it) that describes a product, and that it's something that needs emphasis. However, it seems I should rewrite it as:

        .priceTag { ... }

    This removes all contextual information about the style. And if I want differently formatted price tags (for example, one in a list on the sidebar and one in a paragraph), I need to use something like:

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    That really annoys me, since I seem to be duplicating the semantics of the HTML in my classes. It also means I can't put common styles in an unqualified .priceTag { ... }, so I need to replicate the style in both CSS rules, making changes harder. (Although I could use multiple class selectors for that, IE6 doesn't support them.)

    Making code harder to read for the sake of speed has never really been considered good practice, except where speed is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and drop my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?


  • Is the Scala 2.8 collections library a case of "the longest suicide note in history" ?

    - by oxbow_lakes
    First, note that the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective, but it is a genuine question; I've made it CW and I'd like some opinions on the matter.

    Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford, and I've been programming commercially for almost 12 years, and in Scala for about a year (also commercially).

    I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that, from a usage perspective, little has changed. For example...

        > List("Paris", "London").map(_.length)
        res0: List[Int] = List(5, 6)

    ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala who are poking around to get a feel for the language now have to make sense of method signatures like:

        def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

    For such simple functionality, this is a daunting signature, and one I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user base).

    Is this going to put people off coming to Scala? Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off? Was the library redesign a sensible idea? If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately, or to wait and see what happens?

    Steve Yegge once attacked Scala (mistakenly, in my opinion) for what he saw as its overcomplicated type system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note - I should be clear that, whilst I believe Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly held belief that the proposal represented a mistake.


  • Is this slow WPF TextBlock performance expected?

    - by Ben Schoepke
    Hi, I am doing some benchmarking to determine whether I can use WPF for a new product. However, early performance results are disappointing. I made a quick app that uses data binding to display a bunch of random text inside a list box every 100 ms, and it was eating up ~15% CPU. So I made another quick app that skips the data binding/data template scheme and does nothing but update 10 TextBlocks inside a ListBox every 100 ms (the actual product wouldn't require 100 ms updates - more like 500 ms max - but this is a stress test). I'm still seeing ~10-15% CPU usage. Why is this so high? Is it because of all the garbage strings?

    Here's the XAML:

        <Grid>
            <ListBox x:Name="numericsListBox">
                <ListBox.Resources>
                    <Style TargetType="TextBlock">
                        <Setter Property="FontSize" Value="48"/>
                        <Setter Property="Width" Value="300"/>
                    </Style>
                </ListBox.Resources>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
            </ListBox>
        </Grid>

    Here's the code-behind:

        public partial class Window1 : Window
        {
            private int _count = 0;

            public Window1()
            {
                InitializeComponent();
            }

            private void OnLoad(object sender, RoutedEventArgs e)
            {
                var t = new DispatcherTimer(TimeSpan.FromSeconds(0.1),
                    DispatcherPriority.Normal, UpdateNumerics, Dispatcher);
                t.Start();
            }

            private void UpdateNumerics(object sender, EventArgs e)
            {
                ++_count;
                foreach (object textBlock in numericsListBox.Items)
                {
                    var t = textBlock as TextBlock;
                    if (t != null)
                        t.Text = _count.ToString();
                }
            }
        }

    Any ideas for a better way to quickly render text? My computer: XP SP3, 2.26 GHz Core 2 Duo, 4 GB RAM, Intel 4500HD integrated graphics. And that is an order of magnitude beefier than the hardware I'd need to target in the real product.


  • XML Parsing Error - C#

    - by Indebi
    My code is raising an XML parsing error at line 7, position 32, and I'm not really sure why.

    Exact error dump:

        5/1/2010 10:21:42 AM
        System.Xml.XmlException: An error occurred while parsing EntityName. Line 7, position 32.
           at System.Xml.XmlTextReaderImpl.Throw(Exception e)
           at System.Xml.XmlTextReaderImpl.Throw(String res, Int32 lineNo, Int32 linePos)
           at System.Xml.XmlTextReaderImpl.HandleEntityReference(Boolean isInAttributeValue, EntityExpandType expandType, Int32& charRefEndPos)
           at System.Xml.XmlTextReaderImpl.ParseAttributeValueSlow(Int32 curPos, Char quoteChar, NodeData attr)
           at System.Xml.XmlTextReaderImpl.ParseAttributes()
           at System.Xml.XmlTextReaderImpl.ParseElement()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)
           at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)
           at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace)
           at System.Xml.XmlDocument.Load(XmlReader reader)
           at System.Xml.XmlDocument.Load(String filename)
           at Lookoa.LoadingPackages.LoadingPackages_Load(Object sender, EventArgs e) in C:\Users\Administrator\Documents\Visual Studio 2010\Projects\Lookoa\Lookoa\LoadingPackages.cs:line 30

    The XML file (please note this is just a sample, because I want the program to work before I begin to fill this repository):

        <repo>
          <Packages>
            <TheFirstPackage id="00001" longname="Mozilla Firefox" appver="3.6.3" pkgver="0.01"
              description="Mozilla Firefox is a free and open source web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. A Net Applications statistic put Firefox at 24.52% of the recorded usage share of web browsers as of March 2010[update], making it the second most popular browser in terms of current use worldwide after Microsoft's Internet Explorer."
              cat="WWW" rlsdate="4/8/10" pkgloc="http://google.com"/>
          </Packages>
          <categories>
            <WWW longname="World Wide Web" description="Software that's focus is communication or primarily uses the web for any particular reason.">
            </WWW>
            <Fun longname="Entertainment & Other" description="Music Players, Video Players, Games, or anything that doesn't fit in any of the other categories.">
            </Fun>
            <Work longname="Productivity" description="Application's commonly used for occupational needs or, stuff you work on">
            </Work>
            <Advanced longname="System & Security" description="Applications that protect the computer from malware, clean the computer, and other utilities.">
            </Advanced>
          </categories>
        </repo>

    A small part of the C# code:

        //Loading the Package and Category lists
        //The info from them is gonna populate the listboxes for Category and Packages
        Repository.Load("repo.info");
        XmlNodeList Categories = Repository.GetElementsByTagName("categories");
        foreach (XmlNode Category in Categories)
        {
            CategoryNumber++;
            CategoryNames[CategoryNumber] = Category.Name;
            MessageBox.Show(CategoryNames[CategoryNumber]);
        }

    The MessageBox.Show() is just to make sure it's getting the correct results.
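    An editorial observation, not from the original thread: both longname="Entertainment & Other" and longname="System & Security" contain a bare ampersand inside an attribute value, which is illegal in XML - the parser treats & as the start of an entity reference, which is exactly what "An error occurred while parsing EntityName" complains about, and the first of them sits on line 7 of the sample. Escaping them as Entertainment &amp; Other and System &amp; Security would be the first thing to try.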


  • Help with Linq Expression - INotifyPropertyChanged

    - by Stephen Patten
    Hello, I'm reading the source code from the latest Prism 4 drop and am interested in solving this problem. There is a base class for the ViewModels that implements INotifyPropertyChanged and INotifyDataErrorInfo and provides refactoring-friendly change notification:

        protected void RaisePropertyChanged<T>(Expression<Func<T>> propertyExpresssion)
        {
            var propertyName = ExtractPropertyName(propertyExpresssion);
            this.RaisePropertyChanged(propertyName);
        }

        private string ExtractPropertyName<T>(Expression<Func<T>> propertyExpresssion)
        {
            if (propertyExpresssion == null)
            {
                throw new ArgumentNullException("propertyExpression");
            }

            var memberExpression = propertyExpresssion.Body as MemberExpression;
            if (memberExpression == null)
            {
                throw new ArgumentException("The expression is not a member access expression.", "propertyExpression");
            }

            var property = memberExpression.Member as PropertyInfo;
            if (property == null)
            {
                throw new ArgumentException("The member access expression does not access property.", "propertyExpression");
            }

            if (!property.DeclaringType.IsAssignableFrom(this.GetType()))
            {
                throw new ArgumentException("The referenced property belongs to a different type.", "propertyExpression");
            }

            var getMethod = property.GetGetMethod(true);
            if (getMethod == null)
            {
                // this shouldn't happen - the expression would reject the property before reaching this far
                throw new ArgumentException("The referenced property does not have a get method.", "propertyExpression");
            }

            if (getMethod.IsStatic)
            {
                throw new ArgumentException("The referenced property is a static property.", "propertyExpression");
            }

            return memberExpression.Member.Name;
        }

    And an example of its usage:

        private void RetrieveNewQuestionnaire()
        {
            this.Questions.Clear();
            var template = this.questionnaireService.GetQuestionnaireTemplate();
            this.questionnaire = new Questionnaire(template);
            foreach (var question in this.questionnaire.Questions)
            {
                this.Questions.Add(this.CreateQuestionViewModel(question));
            }

            this.RaisePropertyChanged(() => this.Name);
            this.RaisePropertyChanged(() => this.UnansweredQuestions);
            this.RaisePropertyChanged(() => this.TotalQuestions);
            this.RaisePropertyChanged(() => this.CanSubmit);
        }

    My question is this: what would it take to pass an array of property expressions to an overloaded RaisePropertyChanged method and condense that last bit of code from four lines to one? Thank you, Stephen
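    One possible overload, sketched by the editor rather than taken from Prism itself (it omits the defensive validation that ExtractPropertyName performs). Because the params array fixes the expression type at Func<object>, value-typed properties such as int or bool counts arrive wrapped in a boxing Convert node, which the sketch unwraps:

        using System;
        using System.Linq.Expressions;

        public abstract class ViewModelSketch
        {
            protected abstract void RaisePropertyChanged(string propertyName);

            protected void RaisePropertyChanged(
                params Expression<Func<object>>[] propertyExpressions)
            {
                foreach (var expression in propertyExpressions)
                {
                    // Reference-typed properties are plain member access;
                    // value-typed ones are Convert(memberAccess) due to boxing.
                    var member = expression.Body as MemberExpression
                        ?? (MemberExpression)((UnaryExpression)expression.Body).Operand;
                    RaisePropertyChanged(member.Member.Name);
                }
            }
        }

    With that in place, the four calls collapse to:

        this.RaisePropertyChanged(() => this.Name, () => this.UnansweredQuestions,
            () => this.TotalQuestions, () => this.CanSubmit);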


  • PNGException "crc corruption" when attempting to create ImageIcon objects from ZIP archive

    - by Nathan Strong
    I've got a ZIP file containing a number of PNG images that I am trying to load into my Java application as ImageIcon resources directly from the archive. Here's my code:

        import java.io.*;
        import java.util.Enumeration;
        import java.util.zip.*;
        import javax.swing.ImageIcon;

        public class Test {
            public static void main( String[] args ) {
                if( args.length == 0 ) {
                    System.out.println("usage: java Test.java file.zip");
                    return;
                }

                File archive = new File( args[0] );
                if( !archive.exists() || !archive.canRead() ) {
                    System.err.printf("Unable to find/access %s.\n", archive);
                    return;
                }

                try {
                    ZipFile zip = new ZipFile(archive);
                    Enumeration<? extends ZipEntry> e = zip.entries();

                    while( e.hasMoreElements() ) {
                        ZipEntry entry = (ZipEntry) e.nextElement();
                        int size = (int) entry.getSize();
                        int count = (size % 1024 == 0) ? size / 1024 : (size / 1024) + 1;
                        int offset = 0;
                        int nread, toRead;
                        byte[] buffer = new byte[size];

                        for( int i = 0; i < count; i++ ) {
                            offset = 1024 * i;
                            toRead = (size - offset > 1024) ? 1024 : size - offset;
                            nread = zip.getInputStream(entry).read(buffer, offset, toRead);
                        }

                        ImageIcon icon = new ImageIcon(buffer); // boom -- why?
                    }

                    zip.close();
                } catch( IOException ex ) {
                    System.err.println(ex.getMessage());
                }
            }
        }

    The sizes reported by entry.getSize() match the uncompressed sizes of the PNG files, and I am able to read the data out of the archive without any exceptions, but the creation of the ImageIcon blows up. The stack trace (printed twice in my output):

        sun.awt.image.PNGImageDecoder$PNGException: crc corruption
            at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699)
            at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707)
            at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234)
            at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246)
            at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172)
            at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136)

    Can anyone shed some light on this? Google hasn't turned up any useful information.
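    One thing that stands out on an editorial read (an observation, not a verified fix): zip.getInputStream(entry) is called inside the read loop, so every iteration opens a brand-new stream positioned at byte 0 of the entry, yet writes into the buffer at an ever-advancing offset. The assembled bytes are therefore 1024-byte slices of the start of the file laid end to end, which is exactly the kind of internal inconsistency a PNG CRC check would catch. Opening the stream once before the loop - and looping on read until the buffer is full, since read may return fewer bytes than requested - should be the first thing to try.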


  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have legacy code to maintain, and while trying to understand the logic behind it, I have run into lots of annoying issues. The application is written mainly in JavaScript, with extensive usage of jQuery and various plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting the result of a remote AJAX request. It also uses callbacks a lot, and a pretty complicated "by convention" programming style (lots of event handlers are created on the fly based on certain object names - e.g. the current page name or the current step name). On top of that, the code is very messy and has no obvious inner structure: functions are scattered through the code, file names do not reflect the business role of the code, lots of functions and code snippets are most likely not used at all, etc.

    PROBLEM: how do I approach this code base so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools?

    Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. It would also be nice to be able to create a "diagram of calls" for the application (i.e. for a particular page's logic, the particular flow of function calls that was executed, in order). Not to mention being able to run a coverage analysis to identify potentially orphaned code fragments.

    I would like to stress once more that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...)

    An IDE of some sort that would aid in extending the code would also be great, but I am currently looking into the possibility of using Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say 70% JavaScript with jQuery, 30% ASP).

    I have obviously tried Firebug, but I was unable to find a way to define a breakpoint in, or step into, code which is "injected" into the client JS using AJAX calls (i.e. the application retrieves the code by invoking a URL and injects it into the client-side code). The Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.


  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this, plus a dozen other posts/blogs. I have an ASP.NET app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax:

        void Application_Start(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n");
        }

        protected void Application_OnEnd(Object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n");
        }

        void Application_End(object sender, EventArgs e)
        {
            HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember(
                "_theRuntime",
                BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField,
                null, null, null);

            if (runtime == null)
                return;

            string shutDownMessage = (string)runtime.GetType().InvokeMember(
                "_shutDownMessage",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField,
                null, runtime, null);

            string shutDownStack = (string)runtime.GetType().InvokeMember(
                "_shutDownStack",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField,
                null, runtime, null);

            ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason;

            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug(String.Format(
                "\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n",
                shutDownMessage, shutDownStack, shutdownReason));
        }

        void Application_Error(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nApplication_Error\r\n\r\n");
        }

    Our log file is littered with "APPLICATION STARTING" entries, but neither Application_OnEnd, Application_End, nor Application_Error is ever fired during these spontaneous restarts. I know they work, because there are entries for touching web.config or the /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException, which is caught in Application_Error.

    We are trying to determine whether the virtual memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this is for all of .NET, not just our app's pool, correct? We've also tried:

        var oPerfCounter = new PerformanceCounter();
        oPerfCounter.CategoryName = "Process";
        oPerfCounter.CounterName = "Virtual Bytes";
        oPerfCounter.InstanceName = "iisExpress";
        logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes");

    ...but we don't have permission to do that in shared hosting. I've monitored the app on a dev server, with the same requests that caused the recycles in production and ANTS Memory Profiler attached, and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort.

    My questions are these:

    1. How can I effectively monitor memory usage in shared hosting, to tell how much my application is consuming prior to an application recycle?
    2. Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called?
    3. How else can I determine what is causing these recycles?

    Thanks.
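    On question 1, a minimal sketch of what could be tried (an editor's suggestion, not from the linked posts; note System.Diagnostics.Process is frequently blocked under shared/medium trust, in which case this throws a SecurityException rather than reporting anything):

        using System;
        using System.Diagnostics;

        static class MemoryProbe
        {
            public static string Snapshot()
            {
                // GC.GetTotalMemory sees the managed heap of the whole worker
                // process (shared with any other sites in the same app pool);
                // PrivateMemorySize64 approximates the "Private Bytes" figure
                // that memory-based recycling limits are usually measured against.
                Process p = Process.GetCurrentProcess();
                return String.Format("Managed: {0:N0} bytes, Private: {1:N0} bytes",
                    GC.GetTotalMemory(false), p.PrivateMemorySize64);
            }
        }

    On question 2, one plausible explanation (an assumption, not verified against this particular host): if the host enforces its memory cap by killing the worker process outright rather than requesting a graceful shutdown, the managed shutdown pipeline never runs, so Application_End and friends are skipped while Application_Start still fires on the next request.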


  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here.

    Yesterday I wrote a multiprogram. Modifications to shareable resources were put inside critical sections protected by P(mutex) and V(mutex), and the critical-section code was placed in a common library. The library is used by concurrent applications (of my own). I had three applications that use the common code from the library and do their own work independently:

        my library
        ----------
        work_on_shared_resource {
            P(mutex)
            get_shared_resource
            work_with_it
            V(mutex)
        }
        ----------

        my applications
        ----------
        application1 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }
        application2 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }
        application3 {
            *[ work_on_shared_resource ]
        }
        ----------
        *[...] denotes a loop.

    I had to run the applications on Linux. I had carried a belief for years that the OS schedules all the processes running under it with complete fairness - in other words, that it gives every process an equal share of resource usage.

    When the first two applications were put to work, they ran perfectly well without deadlock. But when the third application started running, it always got the resource: since it does nothing in its non-critical region, it reacquires the shared resource far more often than the other tasks, which are busy doing something else between acquisitions. So the other two applications ended up almost totally halted. When the third application was forcefully terminated, the previous two resumed their work as before. I think this is a case of starvation: the first two applications had to starve.

    Now, how can we ensure fairness? I have started believing that the OS scheduler is innocent and blind here. It depends on who wins the race; that process gets the largest share of the CPU and the resource. Should we attempt to ensure fairness towards resource users in the critical-section code in the library? Or should we leave it up to the applications to ensure fairness by being liberal, not greedy?

    To my knowledge, adding code to ensure fairness to the common library would be an overwhelming task. On the other hand, relying on the applications will never ensure 100% fairness either: an application that does very little work after using the shared resource will win the race, whereas an application that does heavy processing after its work with the shared resource will always starve.

    What is the best practice in this case? Where do we ensure fairness, and how?

    Sincerely, Srinivas Nayak
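    One way to push fairness into the library layer (an editor's sketch of the general idea, not a drop-in fix - the question's code is pseudocode, so the choice of C# here is arbitrary) is to replace the bare mutex with a FIFO "ticket" lock: callers are served strictly in arrival order, so a tight loop like application3's cannot barge back in ahead of already-waiting callers. This illustrates fairness between threads; fairness between separate Linux processes would need the same idea built on a shared primitive such as a POSIX semaphore.

        using System.Threading;

        // FIFO lock: each caller takes a numbered ticket and waits until that
        // number is served, so acquisition order equals arrival order.
        class TicketLock
        {
            private readonly object gate = new object();
            private long nextTicket;   // next ticket to hand out
            private long nowServing;   // ticket currently allowed in

            public void Enter()
            {
                lock (gate)
                {
                    long myTicket = nextTicket++;
                    while (myTicket != nowServing)
                        Monitor.Wait(gate);   // releases gate while sleeping
                }
            }

            public void Exit()
            {
                lock (gate)
                {
                    nowServing++;
                    Monitor.PulseAll(gate);   // wake waiters; only the next ticket proceeds
                }
            }
        }

    Usage mirrors P/V: Enter() before get_shared_resource, Exit() after work_with_it.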


  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that act as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered on a single shared CGBitmapContext (guarded by locks) by NSOperation subclasses - executed one at a time in an NSOperationQueue - then wrapped in a UIImage and used by the main thread to update the content of the views.

        -(void)main {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

            if ([self isCancelled]) {
                return;   // note: these early returns skip [pool release]
            }
            if (nil == data) {
                return;
            }

            // Buffer is the shared instance of a CG bitmap context wrapper class;
            // data is a dictionary
            CGImageRef img = [buffer imageCreateWithData:data];
            UIImage * image = [[UIImage alloc] initWithCGImage:img];
            CGImageRelease(img);

            if ([self isCancelled]) {
                [image release];
                return;
            }

            NSDictionary * result = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        image, @"image", id, @"id", nil];

            // target is the instance of the UIView subclass that will use the image
            [target performSelectorOnMainThread:@selector(updateContentWithData:)
                                     withObject:result
                                  waitUntilDone:NO];
            [result release];
            [image release];
            [pool release];
        }

    The updateContentWithData: method of the UIView subclass, performed on the main thread, is just as simple:

        -(void)updateContentWithData:(NSDictionary *)someData {
            NSDictionary * data = [someData retain];

            if ([[data valueForKey:@"id"] isEqualToString:[self pendingRequestId]]) {
                UIImage * image = [data valueForKey:@"image"];
                [self setCurrentImage:image];
                [self setNeedsDisplay];
            }

            // If the image has not been retained, it should be released together
            // with the dictionary retaining it
            [data release];
        }

    The drawLayer:inContext: method of the subclass just gets the CGImage from the UIImage and uses it to update the backing layer, or part of it. No retain or release is involved in the process.

    The problem is that after a while I run out of memory. The number of UIViews is static. The CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks - just the available free memory dipping steadily, rising a few times, then dipping even lower until the application is terminated. The app cycles through about 200-300 of the aforementioned pages before that, but I would expect memory usage to reach a more or less stable level after a bunch of pages have been skimmed at speed or, since the images are up to 3MB in size, to run out much earlier. Any suggestion will be greatly appreciated.


  • Server/configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi (first of all, thanks for your attention, and sorry for my bad English, haha). Also, this is not a programming error - or that's what I think; I believe this is an error in some configuration of the server, or something else, but I don't know what.

    I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts around 10,000 rows into a MySQL database; the script gets the data from an XML file. At first it ran on a shared server (HostGator is our hosting provider) and in the beginning it worked fine, with no trouble. But five months later an error appeared: the process just dies with no reason. The script had only sent and inserted 700 rows, and the process didn't show any warning or error; nothing appears in the error logs, and I hadn't made any change to the script.

    HostGator never helped us, haha, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script now dies when it has only sent and inserted 40 to 50 rows.

    Some information about this error:

    The shared server is on Red Hat 4.1.2-46, and the dedicated server is on CentOS 5.4. I have commented out the line that sends the SMS, and the problem remains on the shared server: at the beginning the script was fine, but then it started to die after inserting about 700 rows, and now it dies after about 2,500 rows - which is better, but we didn't change anything. On the dedicated server, which we also didn't change, the script dies after inserting about 40 rows. Before it dies, the script turns into a zombie process, and we don't know why. Memory usage appears to be 0.3%, and CPU usage 0.7% to 1%. I have raised PHP's memory limit to 128MB, and even to -1 (so PHP has no limit), but the problem remains. We have a limit of 50 simultaneous MySQL connections, so I don't think that's the problem. I'm using mysqli to connect from PHP to MySQL. HostGator reports that they haven't made any change or update to the servers.

    What could the problem be? What should I do? What should I search for? Is there something in the logic I'm missing? What steps should I follow when managing and debugging processes on Linux? Thank you very much - I think this is not a programming problem, but you have more experience than I do, so you can tell me. Thanks!!! Bye!!! :)

    Read the article

  • Does this MySQL Stored Procedure Work?

    - by Laxmidi
    Hi, I got the following stored function from http://dev.mysql.com/doc/refman/5.1/en/functions-that-test-spatial-relationships-between-geometries.html. Does it work?

        CREATE FUNCTION myWithin(p POINT, poly POLYGON) RETURNS INT(1)
        DETERMINISTIC
        BEGIN
          DECLARE n INT DEFAULT 0;
          DECLARE pX DECIMAL(9,6);
          DECLARE pY DECIMAL(9,6);
          DECLARE ls LINESTRING;
          DECLARE poly1 POINT;
          DECLARE poly1X DECIMAL(9,6);
          DECLARE poly1Y DECIMAL(9,6);
          DECLARE poly2 POINT;
          DECLARE poly2X DECIMAL(9,6);
          DECLARE poly2Y DECIMAL(9,6);
          DECLARE i INT DEFAULT 0;
          DECLARE result INT(1) DEFAULT 0;
          SET pX = X(p);
          SET pY = Y(p);
          SET ls = ExteriorRing(poly);
          SET poly2 = EndPoint(ls);
          SET poly2X = X(poly2);
          SET poly2Y = Y(poly2);
          SET n = NumPoints(ls);
          -- Ray casting: toggle `result` each time the ray from the point
          -- crosses an edge of the exterior ring
          WHILE i < n DO
            SET poly1 = PointN(ls, (i + 1));
            SET poly1X = X(poly1);
            SET poly1Y = Y(poly1);
            IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) )
                || ( ( poly2X <= pX ) && ( pX < poly1X ) ) )
              && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X )
                      / ( poly2X - poly1X ) + poly1Y ) ) THEN
              SET result = !result;
            END IF;
            SET poly2X = poly1X;
            SET poly2Y = poly1Y;
            SET i = i + 1;
          END WHILE;
          RETURN result;
        END;

    Usage:

        SET @point = PointFromText('POINT(5 5)');
        -- the exterior ring must be closed (first point repeated at the end)
        SET @polygon = PolyFromText('POLYGON((0 0,10 0,10 10,0 10,0 0))');
        SELECT myWithin(@point, @polygon) AS result;

    I'm using phpMyAdmin and it blows up when using stored procedures. If this one works, then I'll try to figure out how to call it from PHP instead. Thanks, Laxmidi
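
    The asker plans to call the function from PHP; as a hedged illustration of the call pattern (connection parameters are placeholders), this is how it could be invoked from Python with mysql-connector-python; the mysqli version in PHP is structurally the same:

        # Hypothetical sketch: invoke the myWithin() function from client code.
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="secret", database="gis")
        cur = conn.cursor()
        cur.execute(
            "SELECT myWithin(PointFromText('POINT(5 5)'), "
            "PolyFromText('POLYGON((0 0,10 0,10 10,0 10,0 0))')) AS result"
        )
        print(cur.fetchone()[0])  # 1 if the point is inside, 0 otherwise
        cur.close()
        conn.close()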

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small, "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the object, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005, and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines); each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:
    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?
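
    Not the asker's .NET code, but here is a minimal sketch (in Python, under assumed names) of the general technique being asked about: a file-like object whose write() hands fixed-size chunks to a sink (in the real system, a statement that appends to the BLOB column), so the full serialized graph never sits in memory at once.

        # Hypothetical sketch: a chunking stream; `chunks.append` stands in
        # for the real append-to-BLOB database call.
        import io
        import pickle

        class ChunkedBlobWriter(io.RawIOBase):
            def __init__(self, append_chunk, chunk_size=64 * 1024):
                self.append_chunk = append_chunk  # callable taking one bytes chunk
                self.chunk_size = chunk_size
                self.buf = bytearray()

            def writable(self):
                return True

            def write(self, b):
                self.buf.extend(b)
                while len(self.buf) >= self.chunk_size:  # flush full chunks
                    self.append_chunk(bytes(self.buf[:self.chunk_size]))
                    del self.buf[:self.chunk_size]
                return len(b)

            def close(self):
                if self.buf:                # flush the final partial chunk
                    self.append_chunk(bytes(self.buf))
                    self.buf.clear()
                super().close()

        chunks = []                         # placeholder sink
        with ChunkedBlobWriter(chunks.append, chunk_size=16) as out:
            pickle.dump({"state": list(range(100))}, out)
        print(len(chunks), "chunks written")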

    Read the article

  • Error processing Spree sample images - file not recognized by identify command in paperclip geometry.rb:29

    - by purpletonic
    I'm getting an error when I run the Spree sample data. It occurs when Spree tries to load the product data, specifically the product images. Here's the error I'm getting:

        * Execute db:load_file
        loading ruby <GEM DIR>/sample/lib/tasks/../../db/sample/spree/products.rb
        -- Processing image: ror_tote.jpeg
        rake aborted!
        /var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-21549-2rktq1 is not recognized by the 'identify' command.
        <GEM DIR>/paperclip-2.7.1/lib/paperclip/geometry.rb:31:in `from_file'
        <GEM DIR>/spree/core/app/models/spree/image.rb:35:in `find_dimensions'

    I've made sure ImageMagick is installed correctly, as previously I was having problems with it. Here's the output I get when running the identify command directly:

        $ identify
        Version: ImageMagick 6.7.7-6 2012-10-06 Q16 http://www.imagemagick.org
        Copyright: Copyright (C) 1999-2012 ImageMagick Studio LLC
        Features: OpenCL
        ... other usage info omitted ...

    I also used pry with pry-debugger and put a breakpoint in geometry.rb inside Paperclip. Here's what that section of geometry.rb looks like:

        # Uses ImageMagick to determine the dimensions of a file, passed in as either a
        # File or path.
        # NOTE: (race cond) Do not reassign the 'file' variable inside this method as it is likely to be
        # a Tempfile object, which would be eligible for file deletion when no longer referenced.
        def self.from_file file
          file_path = file.respond_to?(:path) ? file.path : file
          raise(Errors::NotIdentifiedByImageMagickError.new("Cannot find the geometry of a file with a blank name")) if file_path.blank?
          geometry = begin
            silence_stream(STDERR) do
              binding.pry
              Paperclip.run("identify", "-format %wx%h :file", :file => "#{file_path}[0]")
            end
          rescue Cocaine::ExitStatusError
            ""
          rescue Cocaine::CommandNotFoundError => e
            raise Errors::CommandNotFoundError.new("Could not run the `identify` command. Please install ImageMagick.")
          end
          parse(geometry) || raise(Errors::NotIdentifiedByImageMagickError.new("#{file_path} is not recognized by the 'identify' command."))
        end

    At the point of my binding.pry statement, the file_path variable is set to the following:

        file_path => "/var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-22732-1ctl1g1"

    I've double-checked that this file exists (by opening the directory in Finder and viewing the file in Preview) and that the program can run identify (by running %x{identify} in pry; I get the same version, ImageMagick 6.7.7-6 2012-10-06 Q16, as before).

    Removing the additional digits after the file extension (is this a timestamp?) and running the Paperclip.run command manually in pry gives me a different error:

        Cocaine::ExitStatusError: Command 'identify -format %wx%h :file' returned 1. Expected 0

    I've also tried manually updating the Paperclip gem in Spree to 3.0.2 and still get the same error. So I'm not really sure what else to try. Is there still something incorrect with my ImageMagick setup?
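
    For reference, Paperclip's geometry check boils down to shelling out to ImageMagick; here is a hedged sketch (in Python, with a placeholder path) of the equivalent call, which can help test whether identify itself chokes on the tempfile name:

        # Minimal sketch reproducing the identify call by hand; substitute
        # the failing tempfile path for the placeholder below.
        import subprocess

        path = "/tmp/ror_tote.jpeg20121007-22732-1ctl1g1"  # placeholder
        proc = subprocess.run(
            ["identify", "-format", "%wx%h", f"{path}[0]"],  # [0]: first frame
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            width, height = proc.stdout.strip().split("x")
            print(f"geometry: {width} x {height}")
        else:
            # A nonzero exit mirrors the Cocaine::ExitStatusError in the trace
            print("identify failed:", proc.stderr.strip())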

    Read the article
