Search Results

Search found 699 results on 28 pages for 'lifetime learner'.

Page 19/28 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • When should I open and close a website's cached WCF proxy?

    - by Brandon Linton
    I've browsed around the other articles on StackOverflow that relate to caching WCF proxies for reuse, and I've read this article explaining why I should explicitly open the proxy before calling anything on it. I'm still a little hazy on the best implementation details. My question is: when should I open and close proxies for service calls on a website, and what should their lifetime be (per call, per request, or per web app)? We aren't planning on leveraging cached security contexts at the moment (but it's not unforeseeable). Thanks!
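
    A pattern that often comes up for this (a sketch only - IMyService and the "MyServiceEndpoint" configuration name are made-up placeholders) is to cache the expensive ChannelFactory for the lifetime of the web app and use a cheap, short-lived channel per call, opening it explicitly and aborting rather than closing it on failure:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            string Echo(string text);
        }

        public static class MyServiceClient
        {
            // Building the ChannelFactory is the expensive part, so cache it per app domain.
            private static readonly ChannelFactory<IMyService> Factory =
                new ChannelFactory<IMyService>("MyServiceEndpoint");   // endpoint name from config

            public static TResult Call<TResult>(Func<IMyService, TResult> action)
            {
                IMyService channel = Factory.CreateChannel();
                var client = (IClientChannel)channel;
                try
                {
                    client.Open();                     // open explicitly before the first call
                    TResult result = action(channel);
                    client.Close();                    // graceful close on success
                    return result;
                }
                catch
                {
                    client.Abort();                    // a faulted channel must be aborted, not closed
                    throw;
                }
            }
        }

    Call sites then look like MyServiceClient.Call(s => s.Echo("ping")), which keeps the open/close policy in one place while the cached factory is reused across requests.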

    Read the article

  • How do you reconcile IDisposable and IoC?

    - by Mr. Putty
    I'm finally wrapping my head around IoC and DI in C#, and am struggling with some of the edges. I'm using the Unity container, but I think this question applies more broadly. Using an IoC container to dispense instances that implement IDisposable freaks me out! How are you supposed to know whether you should Dispose()? The instance might have been created just for you (and therefore you should Dispose() it), or it could be an instance whose lifetime is managed elsewhere (and therefore you'd better not). Nothing in the code tells you, and in fact this could change based on configuration! This seems deadly to me. Can any IoC experts out there describe good ways to handle this ambiguity?
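
    One way people resolve the ambiguity, sketched below with hypothetical IRepository/Repository types, is to make the container always own disposal: register disposable types with a HierarchicalLifetimeManager and resolve them from a short-lived child container, so consuming code never calls Dispose() on anything it was handed.

        using System;
        using Microsoft.Practices.Unity;

        public interface IRepository : IDisposable { void DoWork(); }

        public class Repository : IRepository
        {
            public void DoWork() { /* ... */ }
            public void Dispose() { Console.WriteLine("Repository disposed by the container"); }
        }

        public static class CompositionRoot
        {
            public static void Main()
            {
                var container = new UnityContainer();
                // Instances created with a HierarchicalLifetimeManager are tracked and
                // disposed by the container that created them.
                container.RegisterType<IRepository, Repository>(new HierarchicalLifetimeManager());

                // Per request / unit of work: resolve from a child container and dispose it.
                using (IUnityContainer child = container.CreateChildContainer())
                {
                    var repo = child.Resolve<IRepository>();
                    repo.DoWork();
                }   // child.Dispose() disposes the Repository; callers never call Dispose() themselves
            }
        }

    Disposing the child container cleans up everything it created, which removes the per-instance guesswork at the cost of adopting a scoped resolution style.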

    Read the article

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present, with massive spikes of up to 20k ms await and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare:

        iostat (Oct 6, 2013)
        tps     rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await     svctm    %util
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        16.00   0.00      156.00    9.75      21.89     288.12    36.00    57.60
        5.50    0.00      44.00     8.00      48.79     2194.18   181.82   100.00
        2.00    0.00      16.00     8.00      46.49     3397.00   500.00   100.00
        4.50    0.00      40.00     8.89      43.73     5581.78   222.22   100.00
        14.50   0.00      148.00    10.21     13.76     5909.24   68.97    100.00
        1.50    0.00      12.00     8.00      8.57      7150.67   666.67   100.00
        0.50    0.00      4.00      8.00      6.31      10168.00  2000.00  100.00
        2.00    0.00      16.00     8.00      5.27      11001.00  500.00   100.00
        0.50    0.00      4.00      8.00      2.96      17080.00  2000.00  100.00
        34.00   0.00      1324.00   9.88      1.32      137.84    4.45     59.60
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        22.00   44.00     204.00    11.27     0.01      0.27      0.27     0.60

    Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS, where uname -a reports the following: Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux. The machine is a dedicated server that hosts an online game without any databases or I/O-heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitively to I/O latency, and thus our players experience massive in-game lag, which we would like to address as soon as possible.
    iostat:

        avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
                   1.77    0.01     1.05     1.59    0.00   95.58

        Device:    tps   Blk_read/s   Blk_wrtn/s    Blk_read     Blk_wrtn
        sdb      13.16        25.42       135.12   504701011   2682640656
        sda       1.52         0.74        20.63    14644533    409684488

    Uptime is: 19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32

    Harddisk controller: 01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)

    Harddisks: Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS; Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS

    Partition information from df:

        Filesystem    1K-blocks      Used   Available  Use%  Mounted on
        /dev/sdb1     480191156  30715200   425083668    7%  /home
        /dev/sda2       7692908    437436     6864692    6%  /
        /dev/sda5      15377820   1398916    13197748   10%  /usr
        /dev/sda6      39159724  19158340    18012140   52%  /var

    Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013):

        Device:  rrqm/s  wrqm/s   r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz    await    svctm   %util
        sdb        0.00   15.00  0.00   70.00    0.00   656.00      9.37      4.50      1.83     4.80   33.60
        sdb        0.00    0.00  0.00    2.00    0.00    16.00      8.00     12.00    836.00   500.00  100.00
        sdb        0.00    0.00  0.00    3.00    0.00    32.00     10.67      9.96   1990.67   333.33  100.00
        sdb        0.00    0.00  0.00    4.00    0.00    40.00     10.00      6.96   3075.00   250.00  100.00
        sdb        0.00    0.00  0.00    0.00    0.00     0.00      0.00      4.00      0.00     0.00  100.00
        sdb        0.00    0.00  0.00    2.00    0.00    16.00      8.00      2.62   4648.00   500.00  100.00
        sdb        0.00    0.00  0.00    0.00    0.00     0.00      0.00      2.00      0.00     0.00  100.00
        sdb        0.00    0.00  0.00    1.00    0.00    16.00     16.00      1.69   7024.00  1000.00  100.00
        sdb        0.00   74.00  0.00  124.00    0.00  1584.00     12.77      1.09     67.94     6.94   86.00

    Characteristic charts generated with rrdtool can be found here: iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/ ; iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/

    As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test if the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers: echo 3 > /proc/sys/vm/drop_caches. Directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags in short, unpredictable intervals. Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the raid controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris. Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem?
@Andy Shinn requested smartctl data, here it is: smartctl -a -d megaraid,2 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:13 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 20 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 1236631092 Blocks received from initiator = 1097862364 Blocks read from cache and sent to initiator = 1383620256 Number of read and write commands whose size <= segment size = 531295338 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36556.93 number of minutes until next internal SMART test = 32 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 509271032 47 0 509271079 509271079 20981.423 0 write: 0 0 0 0 0 5022.039 0 verify: 1870931090 196 0 1870931286 1870931286 100558.708 0 Non-medium error count: 0 SMART Self-test log Num Test Status segment LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36538 - [- - -] # 2 Background short Completed 16 36514 - [- - -] # 3 Background short Completed 16 36490 - [- - -] # 4 Background short Completed 16 36466 - [- - -] # 5 Background short Completed 16 36442 - [- - -] # 6 Background long Completed 16 36420 - [- - -] # 7 Background short Completed 16 36394 - [- - -] # 8 Background short Completed 16 36370 - [- - -] # 9 Background long Completed 16 36364 - [- - -] #10 Background short Completed 16 36361 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes] smartctl -a -d megaraid,3 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:26 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 19 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 288745640 Blocks received from initiator = 1097848399 Blocks read from cache and sent to initiator = 1304149705 Number of read and write commands whose size <= segment size = 527414694 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36596.83 number of minutes until next internal SMART test = 28 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 610862490 44 0 610862534 610862534 20470.133 0 write: 0 0 0 0 0 5022.480 0 verify: 2861227413 203 0 2861227616 2861227616 100872.443 0 Non-medium error count: 1 SMART Self-test log Num Test Status segment 
LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36580 - [- - -] # 2 Background short Completed 16 36556 - [- - -] # 3 Background short Completed 16 36532 - [- - -] # 4 Background short Completed 16 36508 - [- - -] # 5 Background short Completed 16 36484 - [- - -] # 6 Background long Completed 16 36462 - [- - -] # 7 Background short Completed 16 36436 - [- - -] # 8 Background short Completed 16 36412 - [- - -] # 9 Background long Completed 16 36404 - [- - -] #10 Background short Completed 16 36401 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    Read the article

  • How do I properly handle a faulted WCF connection?

    - by mafutrct
    In my client program, there is a WCF connection that is opened at startup and is supposed to stay connected until shutdown. However, there is a chance that the server closes the connection due to unforeseeable circumstances (imagine someone pulling the cable). Since the client uses a lot of contract methods in a lot of places, I don't want to add a try/catch to every method call. I've got two ideas for handling this issue: 1) create a method that takes a delegate, executes the delegate inside a try/catch, and returns an Exception in case of a known exception, or null otherwise - the caller then has to deal with non-null results; 2) listen to the Faulted event of the underlying CommunicationObject, but I don't see how I could handle that event except by displaying some error message and shutting down. Are there best practices for handling faulted WCF connections that are supposed to exist for the application's lifetime?
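
    The delegate idea combines nicely with recreating the channel when it faults; a minimal sketch (the contract type, endpoint name, and error policy are hypothetical) could look like this:

        using System;
        using System.ServiceModel;

        public class ResilientClient<TContract> where TContract : class
        {
            private readonly ChannelFactory<TContract> _factory;
            private TContract _channel;

            public ResilientClient(string endpointConfigName)
            {
                _factory = new ChannelFactory<TContract>(endpointConfigName);
            }

            public Exception TryCall(Action<TContract> call)
            {
                TContract channel = null;
                try
                {
                    channel = GetOrCreateChannel();
                    call(channel);
                    return null;                              // success
                }
                catch (CommunicationException ex) { return Recover(channel, ex); }
                catch (TimeoutException ex)       { return Recover(channel, ex); }
            }

            private TContract GetOrCreateChannel()
            {
                if (_channel == null || ((IClientChannel)_channel).State == CommunicationState.Faulted)
                    _channel = _factory.CreateChannel();      // transparently reconnect
                return _channel;
            }

            private Exception Recover(TContract channel, Exception ex)
            {
                if (channel != null)
                    ((IClientChannel)channel).Abort();        // a faulted channel cannot be reused
                _channel = null;                              // force a fresh channel on the next call
                return ex;
            }
        }

    Callers then write something like var error = client.TryCall(s => s.DoSomething()); and handle non-null results in one place, while the faulted channel is quietly replaced.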

    Read the article

  • Add columns to SqlWebEventProvider

    - by Julien N
    In an ASP.NET application, we'd like to use the SqlWebEventProvider to log any event that occurs during the application lifetime. The problem is that we think the table aspnet_WebEvent_Event doesn't provide enough columns and should log more information (we need to keep track of the logged-in user). I'm aware that this information could be stored in the "Details" column, but it wouldn't then be really simple to filter the results and build reports. So I'm searching for a simple way to add a column. I wish I could derive from SqlWebEventProvider, but the methods used to build the stored procedure parameters are private (PrepareParams() and FillParams()). Is there any simple solution that doesn't involve rewriting the entire provider class?
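
    Because those parameter-building methods are private, one workaround (a sketch only - the table, columns and connection string name are hypothetical) is to bypass SqlWebEventProvider and derive directly from the WebEventProvider base class, writing to your own table that includes the extra column:

        using System.Configuration;
        using System.Data.SqlClient;
        using System.Web;
        using System.Web.Management;

        public class UserAwareSqlWebEventProvider : WebEventProvider
        {
            public override void ProcessEvent(WebBaseEvent raisedEvent)
            {
                // HttpContext can be null for events raised outside a request (e.g. heartbeats).
                string user = HttpContext.Current != null && HttpContext.Current.User != null
                    ? HttpContext.Current.User.Identity.Name
                    : string.Empty;

                string connStr = ConfigurationManager.ConnectionStrings["LoggingDb"].ConnectionString;
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "INSERT INTO dbo.WebEvent_Event_Ex (EventTime, EventCode, Message, LoggedUser) " +
                    "VALUES (@time, @code, @message, @user)", conn))
                {
                    cmd.Parameters.AddWithValue("@time", raisedEvent.EventTime);
                    cmd.Parameters.AddWithValue("@code", raisedEvent.EventCode);
                    cmd.Parameters.AddWithValue("@message", raisedEvent.Message);
                    cmd.Parameters.AddWithValue("@user", user);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            public override void Flush() { }      // nothing buffered in this simple sketch
            public override void Shutdown() { }
        }

    The custom provider would then be registered under <healthMonitoring><providers> in web.config in place of the stock SQL provider.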

    Read the article

  • Creating Service with Bluetooth activation in Android

    - by Mr. Kakakuwa Bird
    Hi, I want to create a service in Android which will initially ask the user if they want to start Bluetooth, and set Bluetooth discovery. My questions are: 1) Can I launch the following activities from the service?

        if (!mBluetoothAdapter.isEnabled()) {
            Intent enableBtIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
            startActivityForResult(enableBtIntent, 0);
        }
        // Set Phone Discoverable for 300 seconds.
        Intent discoverableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_DISCOVERABLE);
        discoverableIntent.putExtra(BluetoothAdapter.EXTRA_DISCOVERABLE_DURATION, 600);
        startActivity(discoverableIntent);

    2) I want to keep the phone discoverable for the lifetime of the application. Is that possible? 3) I want to access the empty space available on the SD card. How should I do it? Thanks in advance.

    Read the article

  • Guaranteed COM object release?

    - by Jurily
    I wrote the following code under the assumption that Excel will die with ExcelMonkey:

        class ExcelMonkey {
            private static Excel.Application xl = new Excel.Application();

            public static bool parse(string filename) {
                if (filename.Contains("foo")) {
                    var workbook = xl.Workbooks.Open(filename);
                    var sheet = workbook.Worksheets.get_Item(1);
                    // do stuff
                    return true;
                }
                return false;
            }
        }

    How do I make sure it does? Do I need to release workbook and sheet separately? I want to have Excel around for the lifetime of the program; it's a huge performance improvement.
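
    One hedged approach to the cleanup question (a sketch that slots into the ExcelMonkey class above; whether every intermediate RCW needs releasing depends on what "do stuff" touches) is to close and release the per-call workbook and worksheet explicitly, and only quit and release the cached Application when the program shuts down:

        using System.Runtime.InteropServices;

        public static bool parse(string filename)
        {
            if (!filename.Contains("foo"))
                return false;

            var workbook = xl.Workbooks.Open(filename);
            var sheet = workbook.Worksheets.get_Item(1);
            try
            {
                // do stuff
                return true;
            }
            finally
            {
                // Release the per-call COM objects; the cached Application stays alive.
                Marshal.ReleaseComObject(sheet);
                workbook.Close(false);              // false = don't save changes
                Marshal.ReleaseComObject(workbook);
            }
        }

        // Call once at program shutdown so the Excel process actually exits.
        public static void Shutdown()
        {
            xl.Quit();
            Marshal.ReleaseComObject(xl);
            xl = null;
        }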

    Read the article

  • Loading all files in a directory in a Java applet

    - by WarrenB
    How would one go about programmatically loading all the resource files in a given directory in a JAR file for an applet? The resources will probably change several times over the lifetime of the program, so I don't want to hardcode names. Normally I would just traverse the directory structure using File.list(), but I get permission issues when trying to do that within an applet. I also looked at using an enumeration with something like ClassLoader.getResources(), but it only finds files of the same name within the JAR file. Essentially what I want to do is (something like) this:

        ClassLoader imagesURL = this.getClass().getClassLoader();
        MediaTracker tracker = new MediaTracker(this);
        int i = 0;
        Enumeration<URL> images = imagesURL.getResources("resources/images/image*.gif");
        while (images.hasMoreElements()) {
            tracker.add(getImage(images.nextElement()), i);
            i++;
        }

    I know I'm probably missing some obvious function, but I've spent hours searching through tutorials and documentation for a simple way to do this within an unsigned applet.

    Read the article

  • Owner vs Parent and TAction shortcuts on Frames

    - by Fred
    I have a form with a panel. I create frames at runtime and display them on the panel by setting each frame's Parent property to the panel. When creating the frames I do not set the Owner property, because I manage the lifetime of the frames myself. Until now I've had no problems. Next, I put a TActionList on the frame with some shortcuts on the actions. I found that my actions did not execute until I set the Owner property of the frame to the panel. Can someone explain this to me? I thought the Owner property was just about which component is responsible for freeing its child components, not about forwarding key events.

    Read the article

  • Class members allocation on heap/stack? C++

    - by simplebutperfect
    If a class is declared as follows:

        class MyClass {
            char * MyMember;
            MyClass() { MyMember = new char[250]; }
            ~MyClass() { delete[] MyMember; }
        };

    And it could also be done like this:

        class MyClass {
            char MyMember[250];
        };

    How does a class get allocated on the heap if I do MyClass * Mine = new MyClass();? Does the allocation also include the 250 bytes of the member in the second example, along with the class instantiation? And will the member be valid for the whole lifetime of the MyClass object? As for the first example, is it practical to allocate class members on the heap?

    Read the article

  • Why does it work?

    - by A-ha
    Guys, I asked a question a few days ago and didn't really have time to check it and think about it, but now I've tried one of the solutions and I can't understand why it works. I mean, why is the destructor called at the end of a line like this?

        #include "stdafx.h"
        #include "coutn.h"
        #define coutn coutn()

        int _tmain(int argc, _TCHAR* argv[])
        {
            coutn << "Line one " << 1;   // WHY IS THE DTOR CALLED HERE?
            coutn << "Line two " << " and some text.";
            return 0;
        }

    I assume it has something to do with the lifetime of an object, but I'm not sure what and how. As I think of it, there are two unnamed objects created, but they do not go out of scope, so I can't understand for what reason the dtor is called. Thank you.

    Read the article

  • Running out of memory but not seeing excessive object allocation in Instruments

    - by Scotty Allen
    I have an iPad app that's crashing due to low memory. However, Instruments doesn't show any significant amount of memory allocated using ObjectAlloc - it stays under 1MB for the lifetime of the application. Leaks shows less than 1kB leaked over the course of the run. Memory Monitor shows the free memory on the device dropping significantly with use, eventually dropping to the point that it's out of memory. Here's a screenshot from Instruments: I'm totally stumped. As far as I can tell, this basically says that as far as my app is concerned, I'm never using more than about 750kB, but the device is still running out of physical memory, which is causing my app to crash/force exit. I'm new to debugging memory issues with Xcode. Am I measuring this wrong? Is there another way to see where this memory is going?

    Read the article

  • How to create a communication layer in Android

    - by Palo
    I want to create a communication layer in Android. The layer will communicate with the server asynchronously. Multiple activities should be able to call methods of the communication layer. The layer will get messages from the server (how is not important for the scope of this question) and should be able to tell activities to do some work based on these messages. How should I implement this? Should I do this using an Android Service? The main questions that I need to answer are: How can activities access the layer? How can the layer access activities? How can I make the communication layer live for the lifetime of the application?

    Read the article

  • Cisco ASA5505 8.2 Multiple Outside IP to Multiple Inside IP

    - by GriffJ
    Trying to setup ASA5505. Semi working but having issues with accessing services from the outside. ASA5505 Basic License, Version 8.2. (plus upgrade to unlimited inside hosts). Alert: I'm a Cisco Noob. 321.321.39.X is a place holder for privacy. I came up with this config and tested it tonight. ASA Version 8.2(1) ! hostname <removed> domain-name <removed> enable password <removed> encrypted passwd <removed> encrypted names ! interface Vlan1 nameif inside security-level 100 ip address 172.21.36.1 255.255.252.0 ! interface Vlan2 nameif outside security-level 0 ip address 321.321.39.10 255.255.255.248 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! ftp mode passive dns server-group DefaultDNS domain-name <removed> access-list outside_inbound extended permit tcp any host 321.321.39.10 eq pptp access-list outside_inbound extended permit tcp any host 321.321.39.11 eq https access-list outside_inbound extended permit tcp any host 321.321.39.11 eq 993 access-list outside_inbound extended permit tcp any host 321.321.39.11 eq smtp access-list outside_inbound extended permit tcp any host 321.321.39.11 eq 1001 access-list outside_inbound extended permit tcp any host 321.321.39.11 eq 465 access-list outside_inbound extended permit tcp any host 321.321.39.11 eq domain access-list outside_inbound extended permit udp any eq domain host 321.321.39.11 eq domain access-list outside_inbound extended permit tcp any host 321.321.39.12 eq www access-list outside_inbound extended permit tcp any host 321.321.39.12 eq https access-list outside_inbound extended permit tcp any host 321.321.39.13 eq www access-list outside_inbound extended permit tcp any host 321.321.39.13 eq https access-list outside_inbound extended permit icmp any any echo-reply access-list outside_inbound extended permit icmp any any source-quench access-list outside_inbound extended permit icmp any any unreachable access-list outside_inbound extended permit icmp any any time-exceeded access-list outside_inbound extended permit icmp any any traceroute access-list outside_inbound extended permit icmp any any echo pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 2 321.321.39.11-321.321.39.14 netmask 255.255.255.248 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface pptp 172.21.37.20 pptp netmask 255.255.255.255 static (inside,outside) 321.321.39.11 172.21.37.14 netmask 255.255.255.255 static (inside,outside) 321.321.39.12 172.21.37.24 netmask 255.255.255.255 static (inside,outside) 321.321.39.13 172.21.37.17 netmask 255.255.255.255 access-group outside_inbound in interface outside route outside 0.0.0.0 0.0.0.0 321.321.39.9 1 route inside 192.168.15.0 255.255.255.0 172.21.36.52 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 172.21.36.0 255.255.252.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication 
linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 telnet 172.21.36.0 255.255.252.0 inside telnet timeout 60 ssh timeout 5 console timeout 0 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect pptp inspect ipsec-pass-thru inspect http ! service-policy global_policy global prompt hostname context The servers that had static forwards did not have any outside network access. couldn't ping google.com for instance. mail server couldn't Domain POP the Barracuda spam filter from our ISP etc. So after doing some reading I removed the statics for 172.21.37.11, 12 and 13, and replaced those three with what's below.. static (inside,outside) tcp 321.321.39.11 https 172.21.37.14 https netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 993 172.21.37.14 993 netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 smtp 172.21.37.14 smtp netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 1001 172.21.37.14 1001 netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 465 172.21.37.14 465 netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 domain 172.21.37.14 domain netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.12 www 172.21.37.24 www netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.12 https 172.21.37.24 https netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.13 www 172.21.37.17 www netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.13 https 172.21.37.17 https netmask 255.255.255.255 Now the servers (for instance 172.21.37.14) could ping the outside world again. Mail started flowing (Domain POP was successful) etc. etc. But I forgot to check if webmail worked from the outside admittedly. But the webservers at 172.21.37.17 and 172.21.37.24 still didn't respond from the outside world. Although I was able to PPTP VPN in on 321.321.39.10 (interface) which is the outside interface IP address. and it is static mapped to 172.21.37.20. So I'm thinking there must be something wrong with NAT somewhere? no response from 321.321.39.11 to 321.321.39.14.. Could anyone look over the config and please let me know what I've done wrong? Is there something I've missed? well obviously but.. please help! Thank you.

    Read the article

  • Any open source hosting site for abandoned projects?

    - by ssg
    I have some projects whose development I ceased a long time ago but for which I still get code access requests. I'm currently providing zipped packages from my personal web site. I think zipped packages are far from useful (e.g. you can't read the code right away, can't provide URLs to individual source files, can't fork easily, and their lifetime depends on my own web page's). I want that archaic code to be present on the net regardless of whether I keep my web page up or not. I saw the question "What's the best open source hosting site?". However, most sites require the project "to be active", CodePlex for instance. I didn't go through the EULAs of all providers to see if they allow abandoned projects. Are there elephants' graveyards for old code, without activity restrictions? Which one would you pick, and why?

    Read the article

  • How to gather usage statistics for iPhone app?

    - by FX
    I am in the process of releasing my first iPhone app. It's a simple utility; I'd just like to gauge the release process, app lifetime and trends, so that I can make more realistic choices in future apps. I think it would be nice to have usage statistics in addition to download stats from Apple - for example, how many times the app is opened by each user, what iPhone OS version they have, etc. I think some of it would simply be to try and connect to a known URL on one of my domains, passing it anonymous information (let's say, connect to http://mydomain.net/stats?app=myApp&version=1.0.0&os=3.1.2&used=18). My questions are: is that forbidden in any way by Apple's rules (none that I could find, at least)? Does that seem reasonable to you? Are there existing frameworks that would do this more simply or better than writing my own code?

    Read the article

  • DI with disposable objects

    - by sunnychaganty
    Suppose my repository class looks like this:

        class myRepository : IDisposable {
            private DataContext _context;

            public myRepository(DataContext context) {
                _context = context;
            }

            public void Dispose() {
                // to do: implement dispose of DataContext
            }
        }

    Now, I am using Unity to control the lifetime of my repository and the data context, and have configured the lifetimes as: DataContext - singleton; myRepository - create a new instance each time. Does this mean that I should not be implementing IDisposable on the repository to clean up the DataContext? Any guidance on such items?
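
    A common rule of thumb here is "dispose only what you own": since the container owns the singleton DataContext, the repository should not dispose it. A minimal sketch (hypothetical names) of what that leaves for the repository:

        using System;
        using System.Data.Linq;

        class MyRepository : IDisposable
        {
            private readonly DataContext _context;   // injected; lifetime managed by Unity

            public MyRepository(DataContext context)
            {
                _context = context;
            }

            public void Dispose()
            {
                // Intentionally empty: disposing the shared singleton context here would
                // break every other component that was handed the same instance. If the
                // repository ever created its own context, it would dispose that one.
            }
        }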

    Read the article

  • Symfony caching question (caching a partial)

    - by morpheous
    I am using Symfony 1.3.2 and I have a page that uses a partial from another module. I have two modules: 'foo' and 'foobar'. In module 'foo', I have an 'index' action which uses a partial from the 'foobar' module, so foo/indexSuccess.php looks something like this: Some data here ? I want to cache 'part2' of my foo/indexSuccess.php page, because it is very expensive (slow). I want the cache to have a lifetime of about 10 minutes. In apps/frontend/modules/foo/config/cache.yml, I need to know how to cache 'part2' of the page (i.e. the [very expensive] partial part of the page). Can anyone tell me what entries are required in the cache.yml file?

    Read the article

  • Cookies ignored on Apache/Magento installation

    - by Laizer
    I'm running a Magento based website store on Linux/Apache. In order that user logins are maintained, I've set my cookie lifetime to be close to two years. The cookies are sent out with the right times, I can see them in my browsers. When I visit the site from a previously logged-in browser after about a day, the user is logged out. I can still see the cookies, with their extended life, present in the browser. Where should I start looking to get to the bottom of this issue?

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of the request, regardless of filter chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on webservices), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app profiling goal?

    Read the article

  • TFS Automated Builds to Code Packages

    - by Adam Jenkin
    I would like to hear the best practices, or know how people perform the following task, in TFS 2008. I intend to use TFS for building and storing web application projects. Sometimes these projects can contain hundreds of files (*.cs, *.ascx, etc.). During the lifetime of the website, a small bug will get raised resulting in, say, a stylesheet change and a change to default.aspx.cs. On checking these changes in to TFS, an automated build would be triggered (great!). However, for deploying the changes to the target production machine, I only need to deploy, for example: style.css, default.aspx, MyWebApplications.dll. So my question is: can MSBuild be customized to generate a "code pack" of only the files which require deployment to the production server, based on the changeset which caused the rebuild?

    Read the article

  • Issued by DNOA service access token parsing and validating in Java application

    - by Regfor
    I am creating an OAuth 2.0 access token using DotNetOpenAuth, like this:

        public AccessTokenResult CreateAccessToken(IAccessTokenRequest accessTokenRequestMessage)
        {
            var token = new AuthorizationServerAccessToken();
            token.Lifetime = TimeSpan.FromMinutes(10);

            var signCert = LoadCert(Config.STS_CERT);
            token.AccessTokenSigningKey = (RSACryptoServiceProvider) signCert.PrivateKey;

            var encryptCert = LoadCert(Config.SERVICE_CERT);
            token.ResourceServerEncryptionKey = (RSACryptoServiceProvider) encryptCert.PublicKey.Key;

            var result = new AccessTokenResult(token);
            return result;
        }

    A token issued by this method looks like:

        {
            "access_token": "gAAAAH44atDAyWeu8BFwhLof7rtBRpiZrSlAC0zci8xU81tXHZDVkBX8LXrMLDHDYfimjuSOsdrXQIAY7Xf4JnK1x_fo_JSmvuiA5CvO5JUJNuEmHNSlR4ePO4tBPkOHQnN50DIRJMbHJdQrFZCqqaWz6s0iuvCuTMcTua6J0yaTPQaD9AAAAIAAAADHgef78SHh4-K2aZ87xYRoRFfmQ0lc3ET7Y5vAS7BadLM5btYvmrSkAWsCxhUji92D0LbKgyVkbQuuw5LnRP_zsxe_W_VztTqZ5m9PwJDL6q7McrUfiVQj_XBQqpv2slBeouD0F1k1KjVedR9Pwm7ganz4R7dmeYivnx8f0_isEGBqSZrtnILoit3SOCPyVxmIwizYwLE2bQOtlwVpqtrBMyzc4MVPVyaSiJb2-Lj5tOftEWl0k93Qmr8uzmjDyeCn3TsFX0f_qFgCmxp32_kt4ZTMf4zgmh5yUS1Hy7ERNQxpCIxRTx9yma7JN_K5Pss",
            "token_type": "bearer",
            "expires_in": 43200,
        }

    I need to know whether a Java application will be able to parse and validate a token issued in this manner.

    Read the article

  • Log4net rollingfile appender skipping every other day in asp.net mvc app

    - by tasty_spider_men
    Hi, I've set up some log4net logging on an ASP.NET MVC application I've had running for a little over a month now. I've set up a rolling file (and SMTP) appender on it. The application is hit several thousand times a day - a lot of actions are logged. On top of this, a number of batch jobs are run as part of the application that also write to the log. Now, as I've been occupied with other work, I've not attended to this application much, assuming things to be working based on my test setup. Unfortunately I've discovered that the rolling file appender on my production server seems to be skipping every other day, i.e. I have logs: 10/4/2010, 10/4/2010.1, 10/4/2010.2, 12/4/2010, 12/4/2010.1, 14/4/2010, 14/4/2010.1, etc. Any idea what could be causing this? It is absolutely impossible that there has been no action on these odd days for the lifetime of the application. Cheers
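
    One thing that may be worth ruling out (an assumption, not a diagnosis): under IIS, overlapping worker processes or app-domain recycles can leave the log file exclusively locked, so the newer process silently fails to write until the next roll. A minimal sketch of an equivalent date-rolled appender configured in code with minimal locking (the file path is made up), useful for comparing against the production XML config:

        using log4net.Appender;
        using log4net.Config;
        using log4net.Layout;

        public static class LoggingSetup
        {
            public static void Configure()
            {
                var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
                layout.ActivateOptions();

                var appender = new RollingFileAppender
                {
                    File = @"App_Data\logs\app",                 // hypothetical path
                    RollingStyle = RollingFileAppender.RollingMode.Date,
                    DatePattern = "'.'yyyy-MM-dd'.log'",
                    StaticLogFileName = false,
                    AppendToFile = true,
                    // Minimal locking lets overlapping worker processes share the file
                    // instead of one of them silently failing to log.
                    LockingModel = new FileAppender.MinimalLock(),
                    Layout = layout
                };
                appender.ActivateOptions();

                BasicConfigurator.Configure(appender);
            }
        }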

    Read the article

  • Security for web services only used from a Silverlight application?

    - by Lasse V. Karlsen
    I have googled a bit for how I should handle security in a web service application when the application is basically the data repository for a Silverlight application, but have gotten inconclusive results. The Silverlight application is not supposed to have its own user authentication, since it will be reachable only through a web application that the user has already authenticated against to get in. As such, I was thinking I could simply pass a parameter to the SL application that is a cookie-type value, with a certain lifetime, linked to the user in the database. The SL application would then have to pass this value, alongside other parameters, to the web services. Since the web service is hopefully going to be a generic endpoint with few methods, adding an extra parameter at this level will not be a problem. But am I supposed to roll this system myself? It sounds to me as though this isn't exactly a new problem that nobody has considered before, so what are my options?
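
    If you do roll it by hand, one client-side sketch (the initParams key, header name/namespace, and MyServiceClient proxy are all made up for illustration) is to hand the already-authenticated user's token to the Silverlight app via initParams and attach it to every service call as a message header, which the services then validate against the database:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Windows;

        public partial class App : Application
        {
            public static string SessionToken { get; private set; }

            public App()
            {
                Startup += Application_Startup;
            }

            private void Application_Startup(object sender, StartupEventArgs e)
            {
                // The hosting web page puts the server-issued token into initParams.
                if (e.InitParams.ContainsKey("sessionToken"))
                    SessionToken = e.InitParams["sessionToken"];
                RootVisual = new MainPage();
            }
        }

        public static class ServiceCalls
        {
            public static void CallWithToken(MyServiceClient client)   // hypothetical generated proxy
            {
                using (new OperationContextScope(client.InnerChannel))
                {
                    var header = MessageHeader.CreateHeader(
                        "SessionToken", "urn:myapp:auth", App.SessionToken);
                    OperationContext.Current.OutgoingMessageHeaders.Add(header);
                    client.DoWorkAsync();   // the server reads and validates the header per call
                }
            }
        }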

    Read the article

  • Searching for a complex and well-designed PHP OOP application to learn from

    - by Raveren
    Basically, I am diving ever deeper into complex programming practices. I have almost no friends who are experienced (or more experienced than me) programmers to learn from, so I am looking for the next best thing - learning from the work of strangers. Can anyone recommend a real-world, finished and working application that is well written and OOP-centered? I'd like to take its source and analyze it. Bonus if it's based on Zend Framework. What I am most interested in is objects that, unlike in desktop applications, have only one real operation done to them (or to their representation in the DB or session) during their lifetime (or page load), like user->logIn(). I'm interested in optimal and reusable design patterns and their real-life implementations.

    Read the article
