Search Results

Search found 3240 results on 130 pages for 'groupwise maximum'.


  • Moving from ClearCase to Mercurial: your top tips?

    - by Robusto
    We will soon start replacing ClearCase with Mercurial. I hear this is a good thing. The change model vs. the version model. Wave of the future. I'm prepared to believe this. Still, it kind of frightens me. Hey, it took Joel Spolsky a while to grok the difference and how to get maximum advantage out of Mercurial, so I'm betting I will run into conceptual traps and pitfalls. Does anyone have any real-world "how to grok Mercurial" tips? Any specific suggestions that will help me bridge the conceptual gap? Any warnings about things not to do? I'd appreciate hearing them. I've already read the closest questions on SO related to this topic, as well as the Mercurial tour and a number of other blogs. I'm mainly interested in any gotchas or uh-ohs I may encounter. Any wisdom you can impart will be appreciated.

    Read the article

  • The remote server returned an unexpected response: (413) Request Entity Too Large

    - by user1583591
    If anyone can help me figure out why I am getting the following error when making a call to my WCF service, I would be eternally grateful:

        The maximum message size quota for incoming messages (65536) has been exceeded.
        To increase the quota, use the MaxReceivedMessageSize property on the
        appropriate binding element.

    I have tried modifying the config file on both the service and the client, and made sure the service name includes the namespace, but I can't seem to make any progress. Here are my service config settings:

        <services>
          <service name="CCC.CA-CP &amp; Sightlines Campus Carbon Calculator">
            <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Binding2"
                      contract="CCC.ICCCService" behaviorConfiguration="WebBehavior2" />
          </service>
        </services>
        <bindings>
          <basicHttpBinding>
            <binding name="Binding2" sendTimeout="00:01:00" allowCookies="false"
                     bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                     maxBufferSize="2147483647" maxBufferPoolSize="52428800"
                     maxReceivedMessageSize="2147483647" messageEncoding="Text"
                     textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
              <readerQuotas maxDepth="32" maxStringContentLength="2147483647"
                            maxArrayLength="16384" maxBytesPerRead="20000"
                            maxNameTableCharCount="16384"></readerQuotas>
            </binding>
          </basicHttpBinding>
        </bindings>
        ..
        <dataContractSerializer maxItemsInObjectGraph="12097151" />
        ...
        <requestLimits maxAllowedContentLength="157286400" />
        ...
        <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="2147483647"...

    I also set the client config with the same binding values. Here is the service contract:

        namespace CCC
        {
            [ServiceContract(Name = "CA-CP & Sightlines Campus Carbon Calculator",
                             Namespace = "http://www.sightlines.com/CCC/01")]
            public interface ICCCService
            {
                ....
            }
        }

    Thanks in advance for any help given!
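    One thing worth checking, offered as a hedged guess since the service implementation class isn't shown above: the name attribute on <service> must be the fully-qualified type name of the implementation class, not the friendly Name from the [ServiceContract] attribute. If they don't match, the whole config block is silently skipped and (on .NET 4 and later) a default endpoint with the default 65536 quota is used instead, which would produce exactly this error. A sketch, assuming a hypothetical implementation class named CCCService:

        <services>
          <!-- must match the implementation type, e.g. "class CCCService : ICCCService" -->
          <service name="CCC.CCCService">
            <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Binding2"
                      contract="CCC.ICCCService" behaviorConfiguration="WebBehavior2" />
          </service>
        </services>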

    Read the article

  • How to use Excel VBA to extract Memo field from Access Database?

    - by the.jxc
    I have an Excel spreadsheet. I am connecting to an Access database via ODBC, something along the lines of:

        Set dbEng = CreateObject("DAO.DBEngine.40")
        Set oWspc = dbEng.CreateWorkspace("ODBCWspc", "", "", dbUseODBC)
        Set oConn = oWspc.OpenConnection("Connection", , True, "ODBC;DSN=CLIENTDB;")

    Then I use a query and fetch a result set to get some table data:

        Set oQuery = oConn.CreateQueryDef("tmpQuery")
        oQuery.Sql = "SELECT idField, memoField FROM myTable"
        Set oRs = oQuery.OpenRecordset

    The problem now arises. My field is a dbMemo because the maximum content length is up to a few hundred characters. It's not that long, and in fact the value I'm reading is only a dozen characters. But Excel just doesn't seem able to handle the Memo field content at all. My code...

        ActiveCell = oRs.Fields("memoField")

    ...gives:

        Run-time error '3146': ODBC--call failed.

    Any suggestions? Can Excel VBA actually get at Memo field data, or is it just completely impossible? I get exactly the same error from GetChunk as well:

        ActiveCell = oRs.Fields("memoField").GetChunk(0, 2)

    Converting to a Text field makes everything work fine. However, some data is truncated to 255 characters of course, which means that isn't a workable solution.

    Read the article

  • Boosting my GA with Neural Networks and/or Reinforcement Learning

    - by AlexT
    As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble I've got a Genetic Algorithm working that can evolve a set of rules (handled by boolean values) in order to find a good solution through a maze.

    That being said, the GA alone is okay, but I'd like to beef it up with a Neural Network, even though I have no real working knowledge of Neural Networks (no formal theoretical CS education). After doing a bit of reading on the subject I found that a Neural Network could be used to train a genome in order to improve results. Let's say I have a genome (group of genes), such as

        1 0 0 1 0 1 0 1 0 1 1 1 0 0 ...

    How could I use a Neural Network (I'm assuming an MLP?) to train and improve my genome?

    In addition to this, since I know nothing about Neural Networks, I've been looking into implementing some form of Reinforcement Learning, using my maze matrix (2-dimensional array), although I'm a bit stuck on what the following algorithm wants from me (from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm):

        1. Set the parameter gamma, and the environment reward matrix R
        2. Initialize matrix Q as the zero matrix
        3. For each episode:
           * Select a random initial state
           * Do while the goal state is not reached
             o Select one among all possible actions for the current state
             o Using this possible action, consider going to the next state
             o Get the maximum Q value of this next state based on all possible actions
             o Compute Q(state, action) = R(state, action) + gamma * Max[Q(next state, all actions)]
             o Set the next state as the current state
           End Do
        End For

    The big problem for me is implementing the reward matrix R, what the Q matrix exactly is, and getting the Q value. I use a multi-dimensional array for my maze and enum states for every move. How would this be used in a Q-Learning algorithm? If someone could help out by explaining what I would need to do to implement this, preferably in Java (although C# would be nice too), possibly with some source code examples, it'd be appreciated.
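    The update rule in step 3 is easier to see in code. Below is a minimal, hedged Java sketch of the tabular Q-Learning loop, assuming (as the linked tutorial does) that each state is a maze cell flattened to an index and that an "action" is simply a move to a target cell; R[s][a] holds -1 for impossible moves, 0 for allowed moves, and a large reward for reaching the goal. All names are illustrative, not from the question:

        import java.util.Random;

        public class QLearner {
            static final double GAMMA = 0.8;   // discount factor (the "parameter" in step 1)
            double[][] R;                      // reward matrix; -1 marks an impossible move
            double[][] Q;                      // value matrix, learned over the episodes
            int goal;
            Random rnd = new Random();

            QLearner(double[][] rewards, int goalState) {
                R = rewards;
                goal = goalState;
                Q = new double[R.length][R[0].length];   // step 2: Q starts as all zeros
            }

            void train(int episodes) {
                for (int e = 0; e < episodes; e++) {
                    int state = rnd.nextInt(R.length);   // random initial state
                    while (state != goal) {
                        int action = randomValidAction(state);
                        int next = action;               // an action *is* the target cell here
                        // step 3: Q(s,a) = R(s,a) + gamma * max over a' of Q(s',a')
                        Q[state][action] = R[state][action] + GAMMA * maxQ(next);
                        state = next;
                    }
                }
            }

            double maxQ(int state) {
                double best = 0;
                for (double v : Q[state]) best = Math.max(best, v);
                return best;
            }

            int randomValidAction(int state) {
                int a;
                do { a = rnd.nextInt(R[state].length); } while (R[state][a] < 0);
                return a;
            }
        }

    After training, the greedy path from any start cell is recovered by repeatedly moving to the neighbour with the highest Q value.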

    Read the article

  • Wix - Upgrade always runs older installer msi and fails in trying to read old msi

    - by rkhj
    I'm having a problem with the Windows caching of the installer. I'm trying to do an upgrade, and each time the Windows Installer launches the installer of the older version; when I do the upgrade it complains about problems reading the older version's msi file (because it's not in the same directory anymore). I did change the UpgradeCode and the ProductCode but kept the PackageCode the same. I also have different ProductVersions (2.2.3 vs 2.3.0). Here's a sample of my code:

        <Upgrade Id="$(var.UpgradeCode)">
          <UpgradeVersion Property="OLDAPPFOUND" IncludeMinimum="yes"
                          Minimum="$(var.RTMProductVersion)" IncludeMaximum="no"
                          Maximum="$(var.ProductVersion)"/>
          <UpgradeVersion Property="NEWAPPFOUND" IncludeMinimum="no"
                          Minimum="$(var.ProductVersion)" OnlyDetect="yes"/>
        </Upgrade>

    This is the install sequence:

        <InstallExecuteSequence>
          <Custom Action='SetUpgradeParams' After='InstallFiles'>Installed AND NEWAPPFOUND</Custom>
          <Custom Action='Upgrade' After='SetUpgradeParams'>Installed AND NEWAPPFOUND</Custom>
        </InstallExecuteSequence>

    The error I am getting is:

        A network error occurred while attempting to read from the file:

    Thanks,
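    A hedged observation, based on how Windows Installer identifies packages rather than on anything shown above: this exact symptom is the classic result of a reused PackageCode. The codes are normally managed the other way around from what is described: the UpgradeCode stays fixed across releases (it is how related products find each other), the ProductCode changes for a major upgrade, and the PackageCode must be unique for every built .msi. If two different .msi files share a PackageCode, Windows assumes they are the same file and reaches for the stale cached copy, producing the "network error ... read from the file" message. In WiX the PackageCode can be regenerated automatically on every build:

        <!-- Id="*" makes WiX generate a fresh PackageCode for each build -->
        <Package Id="*" InstallerVersion="200" Compressed="yes" />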

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    Hi, my company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported in the kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors:
            Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:

        - e2retrieve
        - testdisk
        - photorec
        - dd_rescue/dd_rhelp
        - ddrescue
        - fsck.ext2
        - e2salvage

    dumpe2fs output:

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan  1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

    Are there any other tools I should try before considering this disk definitely unrecoverable? Many thanks, ip
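    One avenue not in the list above, offered as a hedged suggestion: since it is the primary group descriptors that are corrupted, e2fsck can be pointed at a backup superblock, which carries its own copy of the descriptors. Work on a dd image of the partition, never the original. With 4096-byte blocks and 32768 blocks per group (as in the dumpe2fs output), the first backup normally lives at block 32768; the paths below are illustrative:

        # copy first, recover on the copy
        dd if=/dev/sda3 of=/mnt/backup/sda3.img bs=4M conv=noerror,sync

        # dry run (-n): list where backup superblocks would be for this geometry
        mke2fs -n -b 4096 /mnt/backup/sda3.img

        # try fsck against a backup superblock
        e2fsck -b 32768 -B 4096 /mnt/backup/sda3.img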

    Read the article

  • [Gray Hat Python] Simple debugger won't work?

    - by Rami Jarrar
    Hi, I'm reading Gray Hat Python and I've reached this:

        class debugger():
            def __init__(self):
                self.h_process = None
                self.pid = None
                self.debugger_active = False

            def load(self, path_to_exe):
                creation_flags = DEBUG_PROCESS
                startupinfo = STARTUPINFO()
                process_information = PROCESS_INFORMATION()
                startupinfo.dwFlags = 0x1
                startupinfo.wShowWindows = 0x0
                startupinfo.cb = sizeof(startupinfo)
                if kernel32.CreateProcessA(path_to_exe, None, None, None, None,
                                           creation_flags, None, None,
                                           byref(startupinfo), byref(process_information)):
                    print "[*] We have successfully launched the process!"
                    print "[*] PID: %d" % (process_information.dwProcessId)
                    self.h_process = self.open_process(process_information.dwProcessId)
                else:
                    print "[*] Error: 0x%08x." % (kernel32.GetLastError())

            def open_process(self, pid):
                h_process = self.open_process(pid)
                if kernel32.DebugActiveProcess(pid):
                    self.debugger_active = True
                    self.pid = int(pid)
                    self.run()
                else:
                    print "[*] Unable to attach to the process."

            def run(self):
                while self.debugger_active == True:
                    self.get_debug_event()

            def get_debug_event(self):
                debug_event = DEBUG_EVENT()
                continue_status = DBG_CONTINUE
                if kernel32.WaitForDebugEvent(byref(debug_event), INFINITE):
                    raw_input("Press a Key to continue...")
                    self.debugger_active = False
                    kernel32.ContinueDebugEvent(debug_event.dwProcessId,
                                                debug_event.dwThreadId,
                                                continue_status)

            def detach(self):
                if kernel32.DebugActiveProcessStop(self.pid):
                    print "[*] Finished debugging. Exiting..."
                    return True
                else:
                    print "There was an error"
                    return False

    When I run my_test.py:

        import my_dbg

        debugger = my_dbg.debugger()
        pid = raw_input('Enter the PID of the process to attach to: ')
        debugger.open_process(int(pid))
        debugger.detach()

    I get this error:

        Traceback (most recent call last):
          File "C:/Python26/dbgpy/my_test.py", line 5, in <module>
            debugger.attach(int(pid))
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
        ...........
        ...........
        ...........
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
        RuntimeError: maximum recursion depth exceeded

    It's because of the loop or something else, but what is it? I'm running on Windows using Python 2.6.4. :)

    Update: I removed h_process = self.open_process(pid), but I get the same error for the next instruction, if kernel32.DebugActiveProcess(pid), so I now think the problem is in the while loop. But what is it?
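    For what it's worth, the recursion is visible right in the pasted class rather than in the while loop: whichever name the method currently has (the traceback says attach, the pasted class says open_process), its first line calls the method itself, so it can never terminate. In the book these are two separate methods: open_process only wraps the Win32 OpenProcess call, and a distinct attach method uses it before calling DebugActiveProcess. A hedged sketch of that separation (Python 2, matching the book's style; the argument order follows the Win32 OpenProcess signature):

        PROCESS_ALL_ACCESS = 0x001F0FFF   # from the book's defines module

        def open_process(self, pid):
            # just obtain a process handle; no recursion, no attaching here
            h_process = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
            return h_process

        def attach(self, pid):
            self.h_process = self.open_process(pid)
            if kernel32.DebugActiveProcess(pid):
                self.debugger_active = True
                self.pid = int(pid)
                self.run()
            else:
                print "[*] Unable to attach to the process."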

    Read the article

  • Should I redesign my code when my colleague says so?

    - by Kirill V. Lyadvinsky
    I wrote a function recently (with help of one guy from SO) that finds the maximum of two ints. Here is the code:

        long get_max(long(*a)(long(*)(long(*)()),long(*)(long(*)(long**))),
                     long(*b)(long(*)(long(*)()),long*,long(*)(long(*)())))
        {
            return (long)((((long(*)(long(*)(long(*)()),long(*)(long(*)())))a) >
                          ((long(*)(long(*)(long(*)()),long(*)(long(*)())))b)) ?
                          ((long(*)(long(*)(long(*)()),long(*)(long(*)())))a) :
                          ((long(*)(long(*)(long(*)()),long(*)(long(*)())))b));
        }

        int main()
        {
            long x = get_max(
                (long(*)(long(*)(long(*)()),long(*)(long(*)(long**))))500,
                (long(*)(long(*)(long(*)()),long*,long(*)(long(*)())))100);
            cout << x << endl; // prints 500 as expected
            return 0;
        }

    It works fine, but my colleague says that I shouldn't use C-style casts. But I think that all those modern static_casts and reinterpret_casts will make my code too cumbersome. Who's right? Should I redesign my code using C++-style casts, or is the original code OK?

    EDIT: For those who mark this question as not-a-question, I'll try to be more clear: should I use C++-style casts instead of C-style casts in the code above?
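    For contrast, here is what the same program looks like with no casts at all, a minimal sketch using std::max that sidesteps the C-vs-C++ cast debate entirely:

        #include <algorithm>
        #include <iostream>

        int main()
        {
            long x = std::max(500L, 100L);  // type-safe, no casts required
            std::cout << x << std::endl;    // prints 500 as expected
            return 0;
        }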

    Read the article

  • WCF - Increase ReaderQuotas on REST service

    - by Christo Fur
    I have a WCF REST service which accepts a JSON string. One of the parameters is a large string of numbers. This causes the following error, which is visible by tracing and using SvcTraceViewer:

        There was an error deserializing the object of type CarConfiguration.
        The maximum string content length quota (8192) has been exceeded while
        reading XML data. This quota may be increased by changing the
        MaxStringContentLength property on the XmlDictionaryReaderQuotas object
        used when creating the XML reader.

    Now I've read all sorts of articles advising how to rectify this. All of them recommend increasing various config settings on the server and client, e.g.:

        http://stackoverflow.com/questions/65452/error-serializing-string-in-webservice-call
        http://bloggingabout.net/blogs/ramon/archive/2008/08/20/wcf-and-large-messages.aspx
        http://social.msdn.microsoft.com/Forums/en/wcf/thread/f570823a-8581-45ba-8b0b-ab0c7d7fcae1

    So my config file looks like this:

        <webHttpBinding>
          <binding name="webBinding" maxBufferSize="5242880" maxReceivedMessageSize="5242880">
            <readerQuotas maxDepth="5242880" maxStringContentLength="5242880"
                          maxArrayLength="5242880" maxBytesPerRead="5242880"
                          maxNameTableCharCount="5242880"/>
          </binding>
        </webHttpBinding>
        ...
        <endpoint address="/" binding="webHttpBinding" bindingConfiguration="webBinding"

    My problem is that I can change this on the server, but there are no WCF config settings on the client, as it's a REST service and I'm just making an HTTP request using the WebClient object. Any ideas?

    Read the article

  • jQuery datepicker minDate, maxDate

    - by Cesar Lopez
    I have the two following datepicker objects.

    This one is to restrict the dates to future dates. What I want: restrict the dates from the current date to 30 years ahead. What I get: restrict the dates from the current date to 10 years ahead.

        $(".datepickerFuture").datepicker({
            showOn: "button",
            buttonImage: 'calendar.gif',
            buttonText: 'Click to select a date',
            duration: "fast",
            changeMonth: true,
            changeYear: true,
            dateFormat: 'dd/mm/yy',
            constrainInput: true,
            minDate: 0,
            maxDate: '+30Y',
            buttonImageOnly: true
        });

    This one is to restrict selection to past dates only. What I want: restrict the dates from the current date to 120 years ago. What I get: restrict the dates from the current date to 120 years ago, but when I select a year, the maximum year resets to the selected year (e.g. what I get when the page loads fresh is 1890-2010, but if I select 2000, the year select box resets to 1880-2000).

        $(".datepickerPast").datepicker({
            showOn: "button",
            buttonImage: 'calendar.gif',
            buttonText: 'Click to select a date',
            duration: "fast",
            changeMonth: true,
            changeYear: true,
            dateFormat: 'dd/mm/yy',
            constrainInput: true,
            yearRange: '-120:0',
            maxDate: 0,
            buttonImageOnly: true
        });

    I need help with both datepicker objects; any help would be very much appreciated. Thanks in advance. Cesar.
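    A hedged guess on the second picker, based on how jQuery UI parses yearRange endpoints: an endpoint is only treated as relative to today when it carries an explicit + or - sign, so the bare 0 may be what lets the upper bound drift with the selected year. Writing both endpoints with signs pins them to today:

        $(".datepickerPast").datepicker({
            changeMonth: true,
            changeYear: true,
            yearRange: '-120:+0',  // both endpoints explicitly relative to today
            maxDate: 0
        });

    On the first picker, the 10-year ceiling may come not from maxDate but from the year dropdown's default yearRange of 'c-10:c+10' capping what is shown; adding yearRange: '+0:+30' there is worth a try.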

    Read the article

  • Hibernate: OutOfMemoryError persisting Blob when printing log message

    - by paul
    I have a Hibernate entity:

        @Entity
        class Foo {
            // ...
            @Lob
            public byte[] getBytes() {
                return bytes;
            }
            // ...
        }

    My VM is configured with a maximum heap size of 512 MB. When I try to persist an object which has a 75 MB large object, I get an OutOfMemoryError. The names of the methods in the stack trace (StringBuilder, ByteArrayBlobType.toLoggableString, pretty.Printer.toString) suggest that Hibernate is trying to write a very large log message that contains my object. Am I correct about why Hibernate is using so much memory? What is the simplest way to work around this problem?

        java.lang.OutOfMemoryError: Java heap space
            at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:44)
            at java.lang.StringBuilder.<init>(StringBuilder.java:81)
            at org.hibernate.type.ByteArrayBlobType.toString(ByteArrayBlobType.java:117)
            at org.hibernate.type.ByteArrayBlobType.toLoggableString(ByteArrayBlobType.java:127)
            at org.hibernate.pretty.Printer.toString(Printer.java:53)
            at org.hibernate.pretty.Printer.toString(Printer.java:90)
            at org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:97)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:26)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
            at org.jboss.seam.persistence.HibernateSessionProxy.flush(HibernateSessionProxy.java:181)
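    The diagnosis in the question looks right: org.hibernate.pretty.Printer only renders entity state when its logger is at DEBUG level, and for a byte[] Lob that means building a giant string. The usual workaround is simply to raise that logger's threshold. A hedged sketch, assuming log4j is the logging backend (adjust accordingly for other backends):

        # stop Hibernate from pretty-printing flushed entities (incl. 75 MB blobs)
        log4j.logger.org.hibernate.pretty=INFO
        log4j.logger.org.hibernate.type=INFO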

    Read the article

  • mod_wsgi for multiple trac projects [Windows]

    - by fampinheiro
    Hello, I have a system with Windows Server 2008, Apache httpd 2.2 and Trac 0.11. I'm using mod_wsgi so that Apache does the web-server job.

    Integration with Trac: after reading this site I found that the most suitable solution was the following (I have in my httpd.conf the line Include conf/extra/httpd-trac.conf).

    httpd-trac.conf:

        LoadModule wsgi_module modules/mod_wsgi.so

        WSGIDaemonProcess tracs processes=3 threads=25 maximum-requests=1000

        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/trac/([^/]+)
        RewriteCond c:\Project\Services\Trac\%1\conf\trac.ini !-f
        RewriteRule . - [F]
        RewriteCond %{REQUEST_URI} ^/trac/([^/]+)
        RewriteRule . - [E=trac.env_path:c:\Project\Services\Trac\%1]

        WSGIScriptAliasMatch ^/trac/([^/]+) c:\Project\Trac\trac.wsgi
        <Directory c:\Project\Trac>
            WSGIProcessGroup tracs
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>

    The problem I encounter is the following:

        C:\Project\Apache\bin>httpd.exe -k start
        Syntax error on line 3 of C:/Project/Apache/conf/extra/httpd-trac.conf:
        Invalid command 'WSGIDaemonProcess', perhaps misspelled or defined by a
        module not included in the server configuration

    The objective: to have multiple Trac projects with different authentication information. If you have another solution than this, please tell me =) Thank you for your help.
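    The "Invalid command 'WSGIDaemonProcess'" error has a known cause, noted here as a hedged pointer: mod_wsgi's daemon mode is only implemented on UNIX-like systems, so on Windows the WSGIDaemonProcess and WSGIProcessGroup directives do not exist at all and Apache rejects the config. A sketch of the same file with the daemon-mode directives removed (everything then runs in embedded mode):

        LoadModule wsgi_module modules/mod_wsgi.so

        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/trac/([^/]+)
        RewriteCond c:\Project\Services\Trac\%1\conf\trac.ini !-f
        RewriteRule . - [F]
        RewriteCond %{REQUEST_URI} ^/trac/([^/]+)
        RewriteRule . - [E=trac.env_path:c:\Project\Services\Trac\%1]

        WSGIScriptAliasMatch ^/trac/([^/]+) c:\Project\Trac\trac.wsgi
        <Directory c:\Project\Trac>
            # WSGIProcessGroup removed: daemon mode is unavailable on Windows
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>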

    Read the article

  • ProgrammingError when aggregating over an annotated & grouped Django ORM query

    - by ento
    I'm trying to construct a query to get the "average, maximum, minimum number of items purchased by a single user". The data source is this simple sales record table:

        class SalesRecord(models.Model):
            id = models.IntegerField(primary_key=True)
            user_id = models.IntegerField()
            product_code = models.CharField()
            price = models.IntegerField()
            created_at = models.DateTimeField()

    A new record is inserted into this table for every item purchased by a user. Here's my attempt at building the query:

        q = SalesRecord.objects.all()
        q = q.values('user_id').annotate(  # group by user and count the # of records
            count=Count('id'),             # (= # of items)
        ).order_by()
        result = q.aggregate(Max('count'), Min('count'), Avg('count'))

    When I try to execute the code, a ProgrammingError is raised at the last line:

        (1064, "You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near 'FROM (SELECT
        sales_records.user_id AS user_id, COUNT(sales_records.`' at line 1")

    Django's error screen shows that the SQL is:

        SELECT  FROM (SELECT `sales_records`.`player_id` AS `player_id`,
            COUNT(`sales_records`.`id`) AS `count`
            FROM `sales_records`
            WHERE (`sales_records`.`created_at` >= %s AND `sales_records`.`created_at` <= %s )
            GROUP BY `sales_records`.`player_id` ORDER BY NULL) subquery

    It's not selecting anything! Can someone please show me the right way to do this?

    Hacking Django: I've found that clearing the cache of selected fields in django.db.models.sql.BaseQuery.get_aggregation() seems to solve the problem, though I'm not really sure whether this is a fix or a workaround.

        @@ -327,10 +327,13 @@
                 # Remove any aggregates marked for reduction from the subquery
                 # and move them to the outer AggregateQuery.
        +        self._aggregate_select_cache = None
        +        self.aggregate_select_mask = None
                 for alias, aggregate in self.aggregate_select.items():
                     if aggregate.is_summary:
                         query.aggregate_select[alias] = aggregate
        -            del obj.aggregate_select[alias]
        +            if alias in obj.aggregate_select:
        +                del obj.aggregate_select[alias]

    ... which yields the result:

        {'count__max': 267, 'count__avg': 26.2563, 'count__min': 1}

    Read the article

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP:

        - Apache 2.2.11
        - MySQL 5.1.36 (InnoDB engine)
        - PHP 5.3.0

    I observe that my WAMP server crashes in the following scenarios:

    1) If I use a low-end PC (low processor speed and low RAM)
    2) After making some changes to the httpd.conf file, e.g. changing the "Allow from" IP address (but here it crashes only once and then it starts to work fine)
    3) Random crashes

    Crash log:

        szAppName : httpd.exe
        szAppVer : 2.2.11.0
        szModName : php5ts.dll
        szModVer : 5.3.0.0
        offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt

    My questions:

    1) Can high CPU utilization or low RAM also cause the HTTP server to crash?
    2) Can excessive file reading, as in every 10 seconds?
    3) Can unlimited script execution time? I have set the maximum execution time in my PHP script to 0, as my script sometimes has to execute for 2-3 days. Is there any way to avoid this?
    4) Access to the database? Should we use a lock before reading and writing?

    Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun

    Read the article

  • Bing Maps Silverlight control pushpin scaling problems

    - by Rares Musina
    Basically my problem is that I've adapted a piece of code found at http://social.msdn.microsoft.com/Forums/en-US/vemapcontroldev/thread/62e70670-f306-4bb7-8684-549979af91c1 which does exactly what I want: it scales some pushpin images according to the map's zoom level. The only problem is that I've adapted this code to run with the Bing Maps Silverlight control (not Virtual Earth as in the original example), and now the images scale correctly but are repositioned, only reaching the desired position when my zoom level is at maximum. Any idea why? Help will be greatly appreciated :)

    Modified code below:

        var layer = new MapLayer();
        map.AddChild(layer);

        // Sydney
        layer.AddChild(new Pin
        {
            ImageSource = new BitmapImage(new Uri("pin.png", UriKind.Relative)),
            MapInstance = map
        }, new Location(-33.86643, 151.2062), PositionMethod.Center);

    becomes something like:

        layer.AddChild(new Pin
        {
            ImageSource = new BitmapImage(new Uri("pin.png", UriKind.Relative)),
            MapInstance = map
        }, new Location(-33.92485, 18.43883), PositionOrigin.BottomCenter);

    I am assuming it has something to do with a different way in which Bing Maps anchors its UI elements. Details on that would also be very useful. Thank you!

    Read the article

  • LINQ to SQL: too much CPU usage. What happens when there are multiple users?

    - by soldieraman
    I am using LINQ to SQL and seeing my CPU usage skyrocket (see the screenshot below). I have three questions:

    1. What can I do to reduce this CPU usage? I have done profiling and basically removed everything. Will making every LINQ to SQL statement into a compiled query help?

    2. I also find that even with compiled queries, simple statements like ByID() can take 3 milliseconds on a server with 3.25 GB RAM at 3.17 GHz; this will only get slower on a less powerful computer. Or will the compiled query get faster the more it is used?

    3. The CPU usage (on the local server) goes to 12-15% for a single user. Will this multiply with the number of users accessing the server when the application is put on a live server, i.e. will 2 users at a time mean 15*2 = 30% CPU usage? If this is the case, is my application limited to a maximum of 4-5 users at a time? Or does LINQ to SQL/.NET share some of the CPU usage?
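    On the compiled-query question, the win comes from paying the expression-tree-to-SQL translation cost once instead of on every call. A minimal hedged sketch (MyDataContext, Customer and the exact shape of ByID are illustrative, not from the question):

        using System;
        using System.Data.Linq;
        using System.Linq;

        static class Queries
        {
            // translated to SQL once, at first use; reused on every subsequent call
            public static readonly Func<MyDataContext, int, Customer> ByID =
                CompiledQuery.Compile((MyDataContext db, int id) =>
                    db.Customers.First(c => c.ID == id));
        }

        // usage:
        // using (var db = new MyDataContext(connectionString))
        // {
        //     var customer = Queries.ByID(db, 42);
        // }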

    Read the article

  • Classic ASP Session not working in IIS 7 Windows Server 2008 R2 x64

    - by user553361
    Hi, I've been googling and searching here for info about this, but so far couldn't find anything relevant to my problem. We have a website currently working on IIS6 and Windows Server 2003 (x86) without any problem. Now we want to migrate our server to a virtual machine with Windows Server 2008 R2 (x64) and IIS7. Our current app is built in Classic ASP and SQL Server (the latter is located on a second server, but that is staying the way it is now). The website is configured as a website, not a virtual directory, using the DefaultAppPool with 4 applications.

    Now, the problem I'm getting is with the sessions, or at least that's what I think, since I created a simple hello.asp with this code:

        <%
        response.write "Hello"
        response.write Session.SessionID
        %>

    And this is giving us this result:

        Hello
        error '8002801d'
        /hello.asp, line 3

    ASP Sessions properties:

        Enable Session State: True
        Maximum Sessions: 2147483647
        New ID On Secure Connection: True
        Time-out: 20 min

    This is the log in Event Viewer:

        Warning 24/12/2010 14:03:42 Active Server Pages 9 None
        FailedReqLog
        Url http://apps.shocklogic.com:80/hello.asp
        App Pool DefaultAppPool
        Authentication anonymous
        User from token NT AUTHORITY\IUSR
        Activity ID {00000000-0000-0000-1400-0080000000F8}
        Site 1
        Process 3312
        Failure Reason STATUS_CODE
        Trigger Status 500
        Final Status 500
        Time Taken 110 msec

    Would be great if anyone has any ideas. Thanks, Federico

    Read the article

  • Why does the Java VM not recover after "Too many open files" errors?

    - by Michael
    In certain well-understood circumstances, our application will open too many sockets (database connections) and reach the maximum open files that the OS allows. We understand this; we are fixing the issue and also bumping up the limit. What we can't explain is why parts of our application don't recover even after the number of connections abates and we're well within the limit. In this case, it's an application running under Tomcat. When this happens, we first start seeing "Too many open files" errors:

        SEVERE: Socket accept failed
        java.net.SocketException: Too many open files
            at java.net.PlainSocketImpl.socketAccept(Native Method)
            at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
            at java.net.ServerSocket.implAccept(ServerSocket.java:453)
            at java.net.ServerSocket.accept(ServerSocket.java:421)
            at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
            at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
            at java.lang.Thread.run(Thread.java:619)

    Eventually, we start seeing NoClassDefFoundErrors inside an application thread that's trying to open HTTP connections:

        java.lang.NoClassDefFoundError: org/apache/commons/httpclient/protocol/ControllerThreadSocketFactory
            at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:128)
            at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
            at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1349)
            [...]
        Caused by: java.lang.ClassNotFoundException: org.apache.commons.httpclient.protocol.ControllerThreadSocketFactory
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1387)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1233)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 8 more

    When the errant connections go away, the server starts accepting connections again, and everything seems OK, but we're left with the latter error constantly being spewed to stderr. Although the application typically logs unloaded classes to stdout, I don't see any such logs just before, during or after the "Too many open files" errors. My initial theory was that the Hotspot JVM would unload seemingly unused classes when it encounters "Too many open files", but if so, it doesn't log the fact. I'd also expect it to recover if that were the case.

    Platform details:

        Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
        Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
        Apache Tomcat Version 6.0.18

    Read the article

  • DDD: Aggregate Roots

    - by Mosh
    Hello, I need help with finding my aggregate root and boundary. I have 3 entities: Plan, PlannedRole and PlannedTraining. Each Plan can include many PlannedRoles and PlannedTrainings.

    Solution 1: At first I thought Plan was the aggregate root, because PlannedRole and PlannedTraining do not make sense out of the context of a Plan; they are always within a plan. Also, we have a business rule that says each Plan can have a maximum of 3 PlannedRoles and 5 PlannedTrainings, so I thought that by nominating the Plan as the aggregate root I could enforce this invariant. However, we have a Search page where the user searches for Plans. The results show a few properties of the Plan itself (and none of its PlannedRoles or PlannedTrainings). I thought that if I had to load the entire aggregate, it would involve a lot of overhead. There are nearly 3000 plans and each may have a few children. Loading all these objects together and then ignoring the PlannedRoles and PlannedTrainings on the search page doesn't make sense to me.

    Solution 2: I just realized the users want 2 more search pages where they can search for PlannedRoles or PlannedTrainings. That made me realize they are trying to access these objects independently, out of the context of a Plan, and that is how I came up with this solution: 3 aggregates, one for each entity. This approach enables me to search for each entity independently and also resolves the performance issue in solution 1. However, using this approach I cannot enforce the invariant mentioned earlier. There is also another invariant that states a Plan can be changed only if it has a certain status, so I shouldn't be able to add any PlannedRoles or PlannedTrainings to a Plan that is not in that status. Again, I can't enforce this invariant with the second approach.

    Any advice would be greatly appreciated. Cheers, Mosh
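    For readers weighing the trade-off: the point of solution 1 is that invariants like "at most 3 roles" live inside the root, where they cannot be bypassed. A hedged Java sketch of what that buys (all names illustrative, not from the question):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        enum Status { EDITABLE, LOCKED }
        class PlannedRole { /* fields omitted */ }

        public class Plan {
            private Status status;
            private final List<PlannedRole> roles = new ArrayList<PlannedRole>();

            // the only way to add a role is through the root, so both
            // invariants are checked in exactly one place
            public void addRole(PlannedRole role) {
                if (status != Status.EDITABLE)
                    throw new IllegalStateException("plan is not in an editable status");
                if (roles.size() >= 3)
                    throw new IllegalStateException("a plan may have at most 3 planned roles");
                roles.add(role);
            }

            public List<PlannedRole> roles() {
                return Collections.unmodifiableList(roles);
            }
        }

    A common middle ground, offered only as a suggestion: keep Plan as the root for writes, and serve the three search pages from read-only queries that bypass the aggregate entirely.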

    Read the article

  • Lucene stop words not removed during searching; need a substitute for AnalyzingQueryParser

    - by iamrohitbanga
    I have created a Lucene index with the following analyzer:

        public class DocSpecAnalyzer extends Analyzer {

            private static CharArraySet stopSet; // = new HashSet<String>(Arrays.asList()); // STOP_WORDS_SET;

            static {
                stopSet = new CharArraySet(FDConstants.stopwords, true);
                // uncommenting this displays all the stop words
                // for (String s : FDConstants.stopwords) {
                //     System.out.println(s);
                // }
            }

            /**
             * Specifies whether deprecated acronyms should be replaced with HOST type.
             * See {@linkplain https://issues.apache.org/jira/browse/LUCENE-1068}
             */
            private final boolean enableStopPositionIncrements;
            private final Version matchVersion;

            public DocSpecAnalyzer(Version matchVersion) {
                this.matchVersion = matchVersion;
                enableStopPositionIncrements =
                    StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion);
            }

            public TokenStream tokenStream(String fieldName, Reader reader) {
                StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader);
                tokenStream.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH);
                TokenStream result = new StandardFilter(tokenStream);
                result = new LowerCaseFilter(result);
                result = new StopFilter(enableStopPositionIncrements, result, stopSet);
                result = new PorterStemFilter(result);
                return result;
            }

            /** Default maximum allowed token length */
            public static final int DEFAULT_MAX_TOKEN_LENGTH = 255;
        }

    Now when I search for documents with a query containing stop words, I get hits for the stop words as well. It is because http://lucene.apache.org/java/2_9_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html does not handle stop words. Is there a substitute?

    Update: I forgot to mention that I need to do a fuzzy search; that is why I am using an AnalyzingQueryParser.

    Update: the portion of code that invokes the AnalyzingQueryParser:

        AnalyzingQueryParser parser =
            new AnalyzingQueryParser(Version.LUCENE_CURRENT, "description", analyzer);
        // fuzzy matching preparation
        String fuzzyStr = TextQuery.prepareFuzzy(tq.text, fuzzyDist);
        Query query = parser.parse(fuzzyStr);
        TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, true);
        searcher.search(query, collector);

    Read the article

  • Fluid CSS: float column with overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk:

        +----------+      +------+
        | text     |      | text |
        |          |      |      |
        |          |      |      |
        |          |      |      |
        |          |      |      |
        +----------+      +------+
            max           shrunk

    What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity:

        +----------+      +------+
        | text     |      | text |
        |          |      |      |
        +----------+--+   +------+------+
        | code        |   | code        |
        +----------+--+   +------+------+
        |          |      |      |
        +----------+      +------+
            max           shrunk

    But max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've played around with a bare-minimum scenario to reproduce the problem and noticed that doing either of the following brings back the fluidity:

    1. Remove the <pre> (doh...)
    2. Remove the float: left

    The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively:

        +----------+      +------+
        | text     |      | text |
        +----------+      +------+
        +-------------+   +-------------+
        | code        |   | code        |
        +-------------+   +-------------+
        +----------+      +------+
        +----------+      +------+
            max           shrunk

    But this forces me to insert additional closing and opening <div> elements into the post text, which I'd rather keep semantically pristine.

    Admittedly, I don't have a full grasp of how the box model works with floats with overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripples the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode.

    Read the article

  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on.

    The algorithm goes like this: split the array into right and left halves by comparing the elements to the pivot. Start a new thread for the right and the left, and wait until the threads join back. If there are more available threads, they can create more recursively. Each thread waits for its children to join.

    Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I tried an approach where, when a thread is done, it allows another thread to be started, but this led to hundreds of threads starting and finishing throughout; I think the overhead was way too much. So I got rid of that, and if a thread was done executing, no new thread was created. I got a little more speedup, but it was still a lot slower than a single process.

    The code I used is here: http://pastebin.com/UaGsjcq2

    Does anybody have any clue as to what I could be doing wrong?
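    Without re-working the pastebin itself, here is a hedged sketch of the shape that usually does scale: cap the fan-out by recursion depth (at most 2^depth threads) and by a minimum partition size, and sort one half on the current thread instead of spawning a thread for both halves. All cutoff values are illustrative:

        #include <pthread.h>

        static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

        static int partition(int *a, int lo, int hi) {
            int pivot = a[hi], i = lo;
            for (int j = lo; j < hi; j++)
                if (a[j] < pivot) swap(&a[i++], &a[j]);
            swap(&a[i], &a[hi]);
            return i;
        }

        static void quicksort_seq(int *a, int lo, int hi) {
            if (lo >= hi) return;
            int p = partition(a, lo, hi);
            quicksort_seq(a, lo, p - 1);
            quicksort_seq(a, p + 1, hi);
        }

        typedef struct { int *a; int lo, hi, depth; } job_t;

        static void *qsort_par(void *arg) {
            job_t *job = (job_t *)arg;
            /* below the cutoff (or out of thread budget), a thread costs more
               than it buys: fall back to the plain sequential sort */
            if (job->depth <= 0 || job->hi - job->lo < 50000) {
                quicksort_seq(job->a, job->lo, job->hi);
                return NULL;
            }
            int p = partition(job->a, job->lo, job->hi);
            job_t left  = { job->a, job->lo,  p - 1, job->depth - 1 };
            job_t right = { job->a, p + 1, job->hi, job->depth - 1 };
            pthread_t t;
            if (pthread_create(&t, NULL, qsort_par, &left) == 0) {
                qsort_par(&right);      /* this thread takes the other half itself */
                pthread_join(t, NULL);
            } else {
                quicksort_seq(job->a, job->lo, job->hi);  /* creation failed */
            }
            return NULL;
        }

    With depth = 4 this tops out at 16 concurrent threads, one per available processor, and small partitions never pay the pthread_create cost at all.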

    Read the article

  • WordPress > Optimizing a query to show recent posts with a "View All" link when postcount exceeds maxPosts

    - by Scott B
    I have a setting in my theme that allows the site owner to set the maximum number of posts ($maxPosts) to display in a "Recent Posts" menu. I'm using a custom script to generate the recent posts (because the Recent Posts widget does not highlight the current page, which I need for my CSS). My menu is also set up to display a "View All" link below the post listing, but only if the actual post count exceeds $maxPosts.

    I'm trying to work out the best method for getting the post count and comparing it to $maxPosts in order to determine whether or not to show a "View All" link. I'm sure there's probably a better way, but here's my code. I'm looking to optimize it to support very large post counts:

        $cat = get_cat_ID('excludeFromRecentPosts');
        $catHidden = get_cat_ID('hidden');
        $myquery = new WP_Query();
        $myquery->query(array(
            'cat' => "-$cat,-$catHidden",
            'post_not_in' => get_option('sticky_posts')
        ));
        $myrecentpostscount = $myquery->found_posts;
        if ($myrecentpostscount > 0) {
            // show the menu
            if ($myrecentpostscount > $maxPosts) {
                // show "View All" link
            }
        }

    I really only need to determine whether the total post count from the query is greater than the maxPosts setting in order to decide whether to show the "View All" link, so I'm wondering if, to avoid performance issues when there are thousands of matching posts, I can avoid counting all of them: I just need to count up to maxPosts + 1. That's where I'm struggling a bit, because the user could elect to set maxPosts = -1, which means they want to show all posts. But this would be impractical, so I would probably set an upper limit of 20 or so.
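    One way to cap the work, sketched under the assumption that fetching maxPosts + 1 rows is acceptable: let MySQL do the limiting via posts_per_page and test how many rows actually came back, rather than counting every match. Variable names like $limit and $showViewAll are illustrative; note also post__not_in with a double underscore, which is how WP_Query spells that parameter:

        <?php
        $limit = ($maxPosts > 0) ? $maxPosts : 20;  // hard cap for the -1 "show all" case
        $myquery = new WP_Query(array(
            'cat'            => "-$cat,-$catHidden",
            'post__not_in'   => get_option('sticky_posts'),
            'posts_per_page' => $limit + 1,   // one extra row just to detect overflow
        ));
        $showViewAll = ($myquery->post_count > $limit);
        // render only the first $limit posts; show "View All" when $showViewAll is true
        ?>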

    Read the article

  • WPF: UnauthorizedAccessException using anonymous methods.

    - by Diego Pacheco Pedemonte
    Here is the thing: I want to create a simple app that copies many files from one place to another, but using async methods and creating a new thread.

        private void button3_Click(object sender, RoutedEventArgs e)
        {
            //progressBar1.Maximum = _FileInfoArray.Count;
            DispatcherTimer dt1 = new DispatcherTimer();
            foreach (FileInfo Fi in _FileInfoArray)
            {
                Thread t = new Thread(new ThreadStart(delegate()
                {
                    DispatcherOperation _dispOp =
                        progressBar1.Dispatcher.BeginInvoke(DispatcherPriority.Loaded,
                            new Action(delegate()
                            {
                                File.Copy(txtdestino.Text, Fi.FullName, true);
                                //progressBar1.Value = n;
                                //txtstatus.Content = ("Copiados " + n.ToString() + " archivos");
                                //Thread.Sleep(100);
                            }));
                    _dispOp.Completed += new EventHandler(_dispOp_Completed);
                }));
                t.Start();
            }
        }

    An UnauthorizedAccessException is thrown! It says that I can't access the txtdestino content. Any clues?

    EDIT: This is the version with all the changes; I get the same error :( Any clues?

        private void button4_Click(object sender, RoutedEventArgs e)
        {
            // First: build mynames
            List<string> mynames = new List<string>();
            foreach (FileInfo fi in _FileInfoArray)
            {
                mynames.Add(fi.FullName);
            }
            Thread t = new Thread(new ThreadStart(delegate()
            {
                foreach (string fullname in mynames)
                {
                    DispatcherOperation _dispOp =
                        progressBar1.Dispatcher.BeginInvoke(DispatcherPriority.Loaded,
                            new Action(delegate()
                            {
                                string destino = System.IO.Path.Combine(@"C:\",
                                    System.IO.Path.GetFileName(fullname));
                                File.Copy(fullname, destino, true);
                                // some progressbar changes
                            }));
                    _dispOp.Completed += new EventHandler(_dispOp_Completed);
                }
            }));
            t.Start();
        }

    In the first version, this is the line where the exception is thrown:

        File.Copy(txtdestino.Text, Fi.FullName, true);
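    A hedged reading of the first version, since the exception happens inside File.Copy rather than when touching the TextBox itself: File.Copy(source, dest) takes the source first, and the destination must be a full file path; passing a folder path (which is what txtdestino.Text sounds like) typically produces exactly an UnauthorizedAccessException. Below is a sketch with the arguments in source-then-destination order, a per-file target path, UI values read once on the UI thread, and only the progress update marshalled back through the Dispatcher. txtdestino, _FileInfoArray and progressBar1 come from the question; destDir, names and target are illustrative:

        private void button3_Click(object sender, RoutedEventArgs e)
        {
            string destDir = txtdestino.Text;   // read the TextBox on the UI thread
            List<string> names = new List<string>();
            foreach (FileInfo fi in _FileInfoArray) names.Add(fi.FullName);

            Thread t = new Thread(() =>
            {
                foreach (string fullname in names)
                {
                    string target = System.IO.Path.Combine(destDir,
                        System.IO.Path.GetFileName(fullname));
                    File.Copy(fullname, target, true);   // blocking I/O off the UI thread
                    Dispatcher.BeginInvoke(new Action(() => progressBar1.Value++));
                }
            });
            t.Start();
        }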

    Read the article

  • WCF Streaming not working at server

    - by Radhi
    Hi, I have used a WCF service to transfer large files in chunks to the server. For that I referenced this article: http://kjellsj.blogspot.com/2007/02/wcf-streaming-upload-files-over-http.html

    I configured my application on IIS on my machine and it works fine there: it allows up to a 64 MB file upload. But when we published the site, it allows only a maximum of about 30 MB; if I try to upload more than that, I get error 404 - resource not found.

    Here is the binding config I have used:

        <basicHttpBinding>
          <!-- buffer: 64KB; max size: 64MB -->
          <binding name="FileTransferServicesBinding"
                   closeTimeout="00:01:00" openTimeout="00:01:00"
                   receiveTimeout="00:10:00" sendTimeout="00:01:00"
                   transferMode="Streamed" messageEncoding="Mtom"
                   maxBufferSize="65536" maxReceivedMessageSize="67108864">
            <security mode="None">
              <transport clientCredentialType="None"/>
            </security>
          </binding>
        </basicHttpBinding>

    I also set the client config with the same binding values. Please suggest where I am missing something. Thanks in advance.
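    A hedged pointer, based on the numbers rather than on the published server's config (which isn't shown): IIS7's request filtering rejects request bodies larger than 30,000,000 bytes (about 28.6 MB) by default and reports the failure as a 404 (substatus 404.13), which matches both the roughly 30 MB ceiling and the "resource not found" error. If that is the cause, raising the limit in web.config on the published server should fix it:

        <system.webServer>
          <security>
            <requestFiltering>
              <!-- default is 30000000 bytes; raise to 64 MB to match the binding -->
              <requestLimits maxAllowedContentLength="67108864" />
            </requestFiltering>
          </security>
        </system.webServer>
        <system.web>
          <!-- ASP.NET has its own cap, expressed in KB -->
          <httpRuntime maxRequestLength="65536" />
        </system.web>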

    Read the article
