Search Results

Search found 8354 results on 335 pages for 'welton v3 50'.


  • Strange Domain name under the same IP Address

    - by Mike Chip
    There's something really weird happening on my server. But first things first: I wanted to have my own website and chose the domain name "myowndomain.com". On my domain registrar I pointed "myowndomain.com" to the address of my recently set-up VPS, let's say 50.50.50.50. I installed everything I needed to run my website, and then I started to notice strange queries coming from different IP addresses, like these: [client 123.123.123.123] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php [client 456.456.456.456] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php or like this (really a long line, I cut some things): GET /?s=vod-show-id-22-area-%E5%85%B6%E4%BB%96-language-%E9%9F%A9%E8%AF%AD.html HTTP/1.1" 301 295 "http://v.strangedomain.com/?s=vod-s ...[cut]... spider" The lines above are what I see the most. 'strangedomain.com' resolves to the same IP address as my VPS, the one my website is hosted on. The whois of that domain shows it is registered to someone in China, but the street name didn't look right (it was like one huge single word), so I think that info might be fake, though the registrant still appears to be Chinese. I also noticed that all the 'clients' trying to access 'strangedomain.com' are coming from China. If I type 'strangedomain.com' in the browser, I see my website. I'm worried, because my website is actually an e-commerce site. I don't know whether 'strangedomain.com' WAS a website hosted on 50.50.50.50 in the not-so-distant past, or whether it's something else.
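
    A common way to stop serving the e-commerce site for unrecognized Host headers is a catch-all default virtual host. The sketch below is only an illustration (file name, paths and vhost names are assumptions, not taken from the question); it makes Apache answer requests for 'strangedomain.com' or a bare IP with a 403 instead of the real site:

      # hypothetical /etc/apache2/sites-enabled/000-default.conf (Apache 2.2 style)
      NameVirtualHost *:80

      # the first vhost is the default and catches any Host header
      # that has no dedicated vhost of its own
      <VirtualHost *:80>
          ServerName default.invalid
          <Location />
              Order allow,deny
              Deny from all        # unknown hostnames get 403
          </Location>
      </VirtualHost>

      # the real site only answers for its own names
      <VirtualHost *:80>
          ServerName myowndomain.com
          ServerAlias www.myowndomain.com
          DocumentRoot /var/www/html
      </VirtualHost>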

    Read the article

  • Auth problem on Facebook using Ruby/sinatra/frankie/facebooker

    - by user84584
    Hello guys, I'm using sinatra/frankie/facebooker to prototype something simple to test the facebook api, i'm using mmangino-facebooker the more recent version from github and I cloned the most recent version of frankie. I'm using sinatra 0.9.6. My main code is as simple as possible: before do ensure_application_is_installed_by_facebook_user @user = session[:facebook_session].user @photos = session[:facebook_session].get_photos(nil,@user.uid,nil) end get "/" do erb :index end get "/:uid/:image" do |uid,image| @photo_selected = session[:facebook_session].get_photos([image.to_i],nil,nil) erb :selected end The index page just renders a link to the other one (identified by regex "/:uid/:image") however I always get an error when it's trying to render the one identified by regex "/:uid/:image" Facebooker::Session::MissingOrInvalidParameter: Invalid parameter /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/parser.rb:610:in `process' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/parser.rb:30:in `parse' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/service.rb:67:in `post' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:600:in `post_without_logging' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:611:in `post' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/logging.rb:20:in `log_fb_api' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/benchmark.rb:308:in `realtime' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/logging.rb:20:in `log_fb_api' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:610:in `post' /Library/Ruby/Gems/1.8/gems/mmangino-facebooker-1.0.50/lib/facebooker/session.rb:198:in `secure!' ./config/frankie/lib/frankie.rb:66:in `secure_with_token!' ./config/frankie/lib/frankie.rb:44:in `set_facebook_session' ./config/frankie/lib/frankie.rb:164:in `ensure_authenticated_to_facebook' ./config/frankie/lib/frankie.rb:169:in `ensure_application_is_installed_by_facebook_user' I've no idea why, it seems to be related with the auth token I guess.. I logged the request made o the fb rest server: {:sig="4f244d1f510498f4efaae3c03d036a85", :generate_session_secret="0", :method="facebook.auth.getSession", :auth_token="9dae0d02c19c680b574c78d202b0582a", :api_key="70c14732815ace0ae71a561ea5eb38b7", :v="1.0"} {:call_id="1269003766.05665", :sig="194469457d1424dc8ba0678979692363", :method="facebook.photos.get", :subj_id=750401957, :session_key="2.lXL0z3s4_r573xzQwAiA9A__.3600.1269010800-750401957", :api_key="70c14732815ace0ae71a561ea5eb38b7", :v="1.0"} {:sig="4f244d1f510498f4efaae3c03d036a85", :generate_session_secret="0", :method="facebook.auth.getSession", :auth_token="9dae0d02c19c680b574c78d202b0582a", :api_key="70c14732815ace0ae71a561ea5eb38b7", :v="1.0"} The last one gives the error, it could be related with auth_token having the same value in the 1st and on the 3rd ? Cheers and tks, Ze Maria

    Read the article

  • Pseudocode help

    - by vatsag
    Hello all, I've been banging my head for a few hours trying to work out this particular piece of logic. My task is to have a master list, ListM, whose elements are themselves lists: ListA (of type A), ListB (of type B) and ListC (of type C, some generic type). The lists ListA and ListB can also contain elements of the generic type C. The preconditions are: 1. ListA can contain only 50 elements of type 'A'; the rest of its elements can be of the generic type C. 2. ListB can contain only 50 elements of type 'B'; the rest of its elements can be of the generic type C. 3. The lists (ListA or ListB) can take at most 100 elements. 4. ListC can contain elements of the generic type C and can take at most 100 elements. For example: if I have 200 elements of type A, 200 elements of type B and 600 elements of type C, then I should be able to get 4 lists (50 elements in each list) of type ListA (because each ListA can contain at most 50 elements of type A only, the remaining 50 being type C), 4 lists (50 elements in each list) of type ListB (because each ListB can contain at most 50 elements of type B only, the remaining 50 being type C), and 2 lists (50 elements in each list) of type ListC. I shall be glad to explain it again if my explanation is confusing. Can anyone suggest a better way of implementing such a requirement, in the form of pseudocode? Thanks in advance VATSAG

    Read the article

  • sp_addlinkserver using trigger

    - by Nanda
    I have the following trigger, which causes an error when it runs: CREATE TRIGGER ... ON ... FOR INSERT, UPDATE AS IF UPDATE(STATUS) BEGIN DECLARE @newPrice VARCHAR(50) DECLARE @FILENAME VARCHAR(50) DECLARE @server VARCHAR(50) DECLARE @provider VARCHAR(50) DECLARE @datasrc VARCHAR(50) DECLARE @location VARCHAR(50) DECLARE @provstr VARCHAR(50) DECLARE @catalog VARCHAR(50) DECLARE @DBNAME VARCHAR(50) SET @server=xx SET @provider=xx SET @datasrc=xx SET @provstr='DRIVER={SQL Server};SERVER=xxxxxxxx;UID=xx;PWD=xx;' SET @DBNAME='[xx]' SET @newPrice = (SELECT STATUS FROM Inserted) SET @FILENAME = (SELECT INPUT_XML_FILE_NAME FROM Inserted) IF @newPrice = 'FAIL' BEGIN EXEC master.dbo.sp_addlinkedserver @server, '', @provider, @datasrc, @provstr EXEC master.dbo.sp_addlinkedsrvlogin @server, 'true' INSERT INTO [@server].[@DBNAME].[dbo].[maildetails] ( 'to', 'cc', 'from', 'subject', 'body', 'status', 'Attachment', 'APPLICATION', 'ID', 'Timestamp', 'AttachmentName' ) VALUES ( 'P23741', '', '', 'XMLFAILED', @FILENAME, '4', '', '8', '', GETDATE(), '' ) EXEC sp_dropserver @server END END The error is: Msg 15002, Level 16, State 1, Procedure sp_MSaddserver_internal, Line 28 The procedure 'sys.sp_addlinkedserver' cannot be executed within a transaction. Msg 15002, Level 16, State 1, Procedure sp_addlinkedsrvlogin, Line 17 The procedure 'sys.sp_addlinkedsrvlogin' cannot be executed within a transaction. Msg 15002, Level 16, State 1, Procedure sp_dropserver, Line 12 The procedure 'sys.sp_dropserver' cannot be executed within a transaction. How can I prevent this error from occurring?
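
    One way around this error, sketched below under the assumption that the linked server can be created once rather than per-row, is to run sp_addlinkedserver / sp_addlinkedsrvlogin outside the trigger (a one-off setup script or an Agent job) and keep only the INSERT inside the trigger. Note also that a four-part name cannot be built from variables such as [@server].[@DBNAME]; it has to be a fixed name or dynamic SQL. Server and database names below are placeholders:

      -- one-off setup, run outside any trigger/transaction
      EXEC master.dbo.sp_addlinkedserver
           @server = N'REMOTESRV',
           @srvproduct = N'',
           @provider = N'SQLNCLI',
           @datasrc = N'xxxxxxxx';
      EXEC master.dbo.sp_addlinkedsrvlogin
           @rmtsrvname = N'REMOTESRV',
           @useself = N'true';
      GO

      -- inside the trigger only the insert remains
      INSERT INTO REMOTESRV.[xx].dbo.maildetails
             ([to], [cc], [from], [subject], [body], [status], [Attachment],
              [APPLICATION], [ID], [Timestamp], [AttachmentName])
      VALUES ('P23741', '', '', 'XMLFAILED', @FILENAME, '4', '', '8', '', GETDATE(), '');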

    Read the article

  • iPhone doesn't save password for Cisco IPsec VPN using racoon daemon

    - by dsx
    On my Debian server I had set up racoon daemon (1:0.8.0-14) for Cisco IPSec VPN using certificates for authentication. My racoon.conf is like following: log info; path certificate "/etc/racoon/certs"; listen { isakmp $SERVER_IP_HERE [500]; isakmp_natt $SERVER_IP_HERE [4500]; } timer { natt_keepalive 10 sec; } remote anonymous { lifetime time 24 hours; proposal_check obey; passive on; exchange_mode aggressive,main; my_identifier asn1dn; peers_identifier asn1dn; verify_identifier on; certificate_type x509 "cert_name.crt" "key_name.key"; ca_type x509 "ca.crt"; mode_cfg on; verify_cert on; ike_frag on; generate_policy on; nat_traversal on; dpd_delay 20; proposal { encryption_algorithm aes; hash_algorithm sha1; authentication_method xauth_rsa_server; dh_group modp1024; } } mode_cfg { conf_source local; auth_source system; auth_throttle 3; save_passwd on; dns4 8.8.8.8; network4 $SOME_LAN_SUBNET; netmask4 255.255.255.0; pool_size 128; } sainfo anonymous { pfs_group 2; lifetime time 24 hour; encryption_algorithm aes; authentication_algorithm hmac_sha1; compression_algorithm deflate; } I'm not using PSK authentication here. Using iPhone configuration utility I had uploaded all required certificates to iPhone and set up VPN on demand. Everything works just fine except one thing: iPhone refuses to save VPN password regardless of save_passwd on; in racoon configuration file. As opposed to iPhone behaviour, Mac OS X 10.8.2 have no problems saving password. I had examined iPhone log file and found following: racoon[151] <Notice>: >>>>> phase change status = phase 1 established configd[50] <Notice>: IPSec Network Configuration started. configd[50] <Notice>: IPSec Network Configuration: INTERNAL-IP4-ADDRESS = $SUBNET_IP_HERE. configd[50] <Notice>: IPSec Network Configuration: INTERNAL-IP4-MASK = 255.255.255.0. configd[50] <Notice>: IPSec Network Configuration: SAVE-PASSWORD = 0. configd[50] <Notice>: IPSec Network Configuration: INTERNAL-IP4-DNS = 8.8.8.8. configd[50] <Notice>: IPSec Network Configuration: BANNER = . configd[50] <Notice>: IPSec Network Configuration: DEF-DOMAIN = . configd[50] <Notice>: IPSec Network Configuration: DEFAULT-ROUTE = local-address $SUBNET_IP_HERE/32. configd[50] <Notice>: IPSec Phase2 starting. configd[50] <Notice>: IPSec Network Configuration established. configd[50] <Notice>: IPSec Phase1 established. Please note IPSec Network Configuration message containing SAVE-PASSWORD = 0.. Is it a bug in racoon daemon on server, or iPhone (iOS version is 6.0.1 (10A523)) or it is me missing something? How to make iPhone remember IPSec VPN password?

    Read the article

  • Event Logging in LINQ C# .NET

    The first thing you'll want to do before using this code is to create a table in your database called TableHistory: CREATE TABLE [dbo].[TableHistory] (     [TableHistoryID] [int] IDENTITY NOT NULL ,     [TableName] [varchar] (50) NOT NULL ,     [Key1] [varchar] (50) NOT NULL ,     [Key2] [varchar] (50) NULL ,     [Key3] [varchar] (50) NULL ,     [Key4] [varchar] (50) NULL ,     [Key5] [varchar] (50) NULL ,     [Key6] [varchar] (50)NULL ,     [ActionType] [varchar] (50) NULL ,     [Property] [varchar] (50) NULL ,     [OldValue] [varchar] (8000) NULL ,     [NewValue] [varchar] (8000) NULL ,     [ActionUserName] [varchar] (50) NOT NULL ,     [ActionDateTime] [datetime] NOT NULL ) Once you have created the table, you'll need to add it to your custom LINQ class (which I will refer to as DboDataContext), thus creating the TableHistory class. Then, you'll need to add the History.cs file to your project. You'll also want to add the following code to your project to get the system date: public partial class DboDataContext{ [Function(Name = "GetDate", IsComposable = true)] public DateTime GetSystemDate() { MethodInfo mi = MethodBase.GetCurrentMethod() as MethodInfo; return (DateTime)this.ExecuteMethodCall(this, mi, new object[] { }).ReturnValue; }}private static Dictionary<type,> _cachedIL = new Dictionary<type,>();public static T CloneObjectWithIL<t>(T myObject){ Delegate myExec = null; if (!_cachedIL.TryGetValue(typeof(T), out myExec)) { // Create ILGenerator DynamicMethod dymMethod = new DynamicMethod("DoClone", typeof(T), new Type[] { typeof(T) }, true); ConstructorInfo cInfo = myObject.GetType().GetConstructor(new Type[] { }); ILGenerator generator = dymMethod.GetILGenerator(); LocalBuilder lbf = generator.DeclareLocal(typeof(T)); //lbf.SetLocalSymInfo("_temp"); generator.Emit(OpCodes.Newobj, cInfo); generator.Emit(OpCodes.Stloc_0); foreach (FieldInfo field in myObject.GetType().GetFields( System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.NonPublic)) { // Load the new object on the eval stack... (currently 1 item on eval stack) generator.Emit(OpCodes.Ldloc_0); // Load initial object (parameter) (currently 2 items on eval stack) generator.Emit(OpCodes.Ldarg_0); // Replace value by field value (still currently 2 items on eval stack) generator.Emit(OpCodes.Ldfld, field); // Store the value of the top on the eval stack into // the object underneath that value on the value stack. // (0 items on eval stack) generator.Emit(OpCodes.Stfld, field); } // Load new constructed obj on eval stack -> 1 item on stack generator.Emit(OpCodes.Ldloc_0); // Return constructed object. 
--> 0 items on stack generator.Emit(OpCodes.Ret); myExec = dymMethod.CreateDelegate(typeof(Func<t,>)); _cachedIL.Add(typeof(T), myExec); } return ((Func<t,>)myExec)(myObject);}I got both of the above methods off of the net somewhere (maybe even from CodeProject), but it's been long enough that I can't recall where I got them.Explanation of the History ClassThe History class records changes by creating a TableHistory record, inserting the values for the primary key for the table being modified into the Key1, Key2, ..., Key6 columns (if you have more than 6 values that make up a primary key on any table, you'll want to modify this), setting the type of change being made in the ActionType column (INSERT, UPDATE, or DELETE), old value and new value if it happens to be an update action, and the date and Windows identity of the user who made the change.Let's examine what happens when a call is made to the RecordLinqInsert method:public static void RecordLinqInsert(DboDataContext dbo, IIdentity user, object obj){ TableHistory hist = NewHistoryRecord(obj); hist.ActionType = "INSERT"; hist.ActionUserName = user.Name; hist.ActionDateTime = dbo.GetSystemDate(); dbo.TableHistories.InsertOnSubmit(hist);}private static TableHistory NewHistoryRecord(object obj){ TableHistory hist = new TableHistory(); Type type = obj.GetType(); PropertyInfo[] keys; if (historyRecordExceptions.ContainsKey(type)) { keys = historyRecordExceptions[type].ToArray(); } else { keys = type.GetProperties().Where(o => AttrIsPrimaryKey(o)).ToArray(); } if (keys.Length > KeyMax) throw new HistoryException("object has more than " + KeyMax.ToString() + " keys."); for (int i = 1; i <= keys.Length; i++) { typeof(TableHistory) .GetProperty("Key" + i.ToString()) .SetValue(hist, keys[i - 1].GetValue(obj, null).ToString(), null); } hist.TableName = type.Name; return hist;}protected static bool AttrIsPrimaryKey(PropertyInfo pi){ var attrs = from attr in pi.GetCustomAttributes(typeof(ColumnAttribute), true) where ((ColumnAttribute)attr).IsPrimaryKey select attr; if (attrs != null && attrs.Count() > 0) return true; else return false;}RecordLinqInsert takes as input a data context which it will use to write to the database, the user, and the LINQ object to be recorded (a single object, for instance, a Customer or Order object if you're using AdventureWorks). It then calls the NewHistoryRecord method, which uses LINQ to Objects in conjunction with the AttrIsPrimaryKey method to pull all the primary key properties, set the Key1-KeyN properties of the TableHistory object, and return the new TableHistory object. The code would be called in an application, like so: Continue span.fullpost {display:none;}

    Read the article

  • Iptables on ubuntu Ubuntu 10.04.1 not working

    - by Kevin
    I am trying to block an IP address from accessing my server by using iptables, but I haven't succeeded. Here are the commands that I used (after running these commands, I still keep seeing 50.18.12.86 sending requests to my Apache server): sudo iptables -F sudo iptables -I OUTPUT -s 50.18.12.86 -j REJECT sudo iptables -I INPUT -s 50.18.12.86 -j REJECT sudo iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination REJECT all -- 50.18.12.86 0.0.0.0/0 reject-with icmp-port-unreachable Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination REJECT all -- 50.18.12.86 0.0.0.0/0 reject-with icmp-port-unreachable I have tried DROP instead of REJECT, but it doesn't help.
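
    A quick way to check whether the rule is matching at all is to look at the packet counters and make sure the REJECT/DROP sits before any ACCEPT rule. This is only a diagnostic sketch (the IP is the one from the question):

      # show rules with packet/byte counters and rule positions
      sudo iptables -L INPUT -n -v --line-numbers

      # if the counters for 50.18.12.86 stay at 0 while requests keep arriving,
      # the traffic is not hitting this rule (for example it arrives via a proxy
      # or an already-established connection); inserting at position 1 makes
      # sure the rule is evaluated first
      sudo iptables -I INPUT 1 -s 50.18.12.86 -j DROP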

    Read the article

  • Why there suddenly were so many 400 request in my access log?

    - by LotusH
    Below is a small part of my access_log:

      118.186.8.50 - - [19/Dec/2011:22:42:57 +0800] "-" 400 0 "-" "-"
      118.186.8.50 - - [19/Dec/2011:22:42:57 +0800] "-" 400 0 "-" "-"
      118.186.8.50 - - [19/Dec/2011:22:42:57 +0800] "-" 400 0 "-" "-"
      118.186.8.50 - - [19/Dec/2011:22:42:57 +0800] "-" 400 0 "-" "-"
      118.186.8.50 - - [19/Dec/2011:22:42:57 +0800] "-" 400 0 "-" "-"
      220.173.136.39 - - [19/Dec/2011:22:43:22 +0800] "-" 400 0 "-" "-"
      220.173.136.39 - - [19/Dec/2011:22:43:22 +0800] "-" 400 0 "-" "-"
      220.173.136.39 - - [19/Dec/2011:22:43:22 +0800] "-" 400 0 "-" "-"
      220.173.136.39 - - [19/Dec/2011:22:43:22 +0800] "-" 400 0 "-" "-"
      220.173.136.39 - - [19/Dec/2011:22:43:22 +0800] "-" 400 0 "-" "-"
      220.173.136.39 - - [19/Dec/2011:22:43:22 +0800] "-" 400 0 "-" "-"

    The volume was huge, something like one hundred thousand of these 400 requests per second. And I'm pretty sure there were no errors on my site in that period of time (no error reports, and I didn't change the source code).

    Read the article

  • Google Maps Polygon - Not creating independent

    - by ferronrsmith
    I am basically populating a map with polygons, I am using the flex sdk. The problem I am having is that each time it adds a parish it keep adding to the previous polygons and treating it as one. HELP!! <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"> <maps:Map xmlns:maps="com.google.maps.*" id="map" mapevent_mapready="onMapReady(event)" width="100%" height="100%" key="ABQIAAAAGe0Fqwt-nY7G2oB81ZIicRT2yXp_ZAY8_ufC3CFXhHIE1NvwkxRcFmaI_t1gtsS5UN6mWQkH9kIw6Q"/> <mx:Script> <![CDATA[ import com.google.maps.Color; import com.google.maps.LatLng; import com.google.maps.Map; import com.google.maps.MapEvent; import com.google.maps.MapType; import com.google.maps.controls.ZoomControl; import com.google.maps.overlays.Polygon; import com.google.maps.overlays.PolygonOptions; import com.google.maps.styles.FillStyle; import com.google.maps.styles.StrokeStyle; import mx.controls.Alert; import mx.utils.ColorUtil; import mx.utils.object_proxy; private var polys:Array = new Array(); private var labels:Array = new Array(); // private var pts:Array = new Array(); // private var poly:Polygon; // private var map:Map; private function onMapReady(event:Event):void { map.setCenter(new LatLng(18.070146,-77.225647), 9, MapType.NORMAL_MAP_TYPE); map.addControl(new ZoomControl()); map.enableScrollWheelZoom(); map.enableContinuousZoom(); getXml(); } public function getXml():void { var xmlString:URLRequest = new URLRequest("parish.xml"); var xmlLoader:URLLoader = new URLLoader(xmlString); xmlLoader.addEventListener("complete", readXml); } private function readXml(event:Event):void { var markersXML:XML = new XML(event.target.data); var markers:XMLList = markersXML..parish; //var markersCount:int = markers.length(); var i:Number; var t:Number; for(i=0; i < markers.length(); i++) { var marker:XML = markers[i]; var name:String = marker.@name; var colour:String = marker.@colour; // Alert.show(""); var the_p:XMLList = markers..point; var pts:Array = []; for(t=0; t < the_p.length(); t++) { var theparish:XML = the_p[t]; pts[t] = new LatLng(theparish.@lat,theparish.@lng); // Alert.show("working" + theparish.@lat); // var pts.push(new LatLng(theparish.@lat,theparish.@lng)); } var poly:Polygon = new Polygon(pts, new PolygonOptions({ strokyStyle: new StrokeStyle({ color: colour, thickness: 1, alpha: 0.2}), fillStyle: new FillStyle({ color: colour, alpha: 0.2}) })); //polys.push(poly); //labels.push(name); Alert.show("this"); pts = [] map.addOverlay(poly); } } /* public function createMarker(latlng:LatLng, name:String, address:String, type:String): Marker { var marker:Marker = new Marker(latlng, new MarkerOptions({icon: new customIcons[type], iconOffset: new Point(-16, -32)})); var html:String = "<b>" + name + "</b> <br/>" + address; marker.addEventListener(MapMouseEvent.CLICK, function(e:MapMouseEvent):void { marker.openInfoWindow(new InfoWindowOptions({contentHTML:html})); }); return marker; } */ ]]> </mx:Script> </mx:Application>
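
    One thing that stands out (an observation, not a verified fix) is that the inner loop reads markers..point, a descendant query over every parish in the document, so each polygon is built from all points in the file. A minimal sketch of scoping the query to the current parish, assuming each <parish> element in parish.xml carries its own <point> children; the strokyStyle key also looks like a typo for strokeStyle, though that would only affect styling:

      // inside the outer loop: take only this parish's points
      var the_p:XMLList = marker..point;
      var pts:Array = [];
      for (t = 0; t < the_p.length(); t++)
      {
          var pt:XML = the_p[t];
          pts.push(new LatLng(pt.@lat, pt.@lng));
      }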

    Read the article

  • GlassFish build failed

    - by hp1
    Hello, Each time I try to deploy my server side code, the build fails. If I try to restart my machine, the build is successful but fails later when I try to build the subsequent times. I get the following Severe messages when I attempt to build: SEVERE: "IOP00410216: (COMM_FAILURE) Unable to create IIOP listener on the specified host/port: localhost/3820" org.omg.CORBA.COMM_FAILURE: vmcid: SUN minor code: 216 completed: No WARNING: Can not find resource bundle for this logger. class name that failed: org.glassfish.enterprise.iiop.impl.IIOPSSLSocketFactory WARNING: Exception getting SocketInfo java.lang.RuntimeException: Orb initialization eror WARNING: "IOP02310202: (OBJ_ADAPTER) Error in connecting servant to ORB" org.omg.CORBA.OBJ_ADAPTER: vmcid: SUN minor code: 202 completed: No Following are the details for the severe method: SEVERE: "IOP00410216: (COMM_FAILURE) Unable to create IIOP listener on the specified host/port: localhost/3820" org.omg.CORBA.COMM_FAILURE: vmcid: SUN minor code: 216 completed: No at com.sun.corba.ee.impl.logging.ORBUtilSystemException.createListenerFailed(ORBUtilSystemException.java:3835) at com.sun.corba.ee.impl.logging.ORBUtilSystemException.createListenerFailed(ORBUtilSystemException.java:3855) at com.sun.corba.ee.impl.transport.SocketOrChannelAcceptorImpl.initialize(SocketOrChannelAcceptorImpl.java:98) at com.sun.corba.ee.impl.transport.CorbaTransportManagerImpl.getAcceptors(CorbaTransportManagerImpl.java:247) at com.sun.corba.ee.impl.transport.CorbaTransportManagerImpl.addToIORTemplate(CorbaTransportManagerImpl.java:264) at com.sun.corba.ee.spi.oa.ObjectAdapterBase.initializeTemplate(ObjectAdapterBase.java:131) at com.sun.corba.ee.impl.oa.toa.TOAImpl.<init>(TOAImpl.java:130) at com.sun.corba.ee.impl.oa.toa.TOAFactory.getTOA(TOAFactory.java:114) at com.sun.corba.ee.impl.orb.ORBImpl.connect(ORBImpl.java:1740) at com.sun.corba.ee.spi.presentation.rmi.StubAdapter.connect(StubAdapter.java:212) at com.sun.corba.ee.impl.orb.ORBImpl.getIOR(ORBImpl.java:2194) at com.sun.corba.ee.impl.orb.ORBImpl.getFVDCodeBaseIOR(ORBImpl.java:966) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.glassfish.gmbal.generic.FacetAccessorImpl.invoke(FacetAccessorImpl.java:126) at org.glassfish.gmbal.impl.MBeanImpl.invoke(MBeanImpl.java:440) at org.glassfish.gmbal.impl.AttributeDescriptor.get(AttributeDescriptor.java:144) at org.glassfish.gmbal.impl.MBeanSkeleton.getAttribute(MBeanSkeleton.java:569) at org.glassfish.gmbal.impl.MBeanSkeleton.getAttributes(MBeanSkeleton.java:625) at org.glassfish.gmbal.impl.MBeanImpl.getAttributes(MBeanImpl.java:389) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:726) at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:665) at org.glassfish.admin.amx.util.jmx.MBeanProxyHandler.getAttributes(MBeanProxyHandler.java:273) at org.glassfish.admin.amx.core.proxy.AMXProxyHandler.attributesMap(AMXProxyHandler.java:1193) at org.glassfish.admin.amx.core.proxy.AMXProxyHandler.attributesMap(AMXProxyHandler.java:1203) at org.glassfish.admin.amx.core.proxy.AMXProxyHandler.handleSpecialMethod(AMXProxyHandler.java:414) at org.glassfish.admin.amx.core.proxy.AMXProxyHandler._invoke(AMXProxyHandler.java:792) at 
org.glassfish.admin.amx.core.proxy.AMXProxyHandler.invoke(AMXProxyHandler.java:526) at $Proxy107.attributesMap(Unknown Source) at org.glassfish.admin.amx.core.AMXValidator._validate(AMXValidator.java:642) at org.glassfish.admin.amx.core.AMXValidator.validate(AMXValidator.java:1298) at org.glassfish.admin.amx.impl.mbean.ComplianceMonitor$ValidatorThread.doRun(ComplianceMonitor.java:256) at org.glassfish.admin.amx.impl.mbean.ComplianceMonitor$ValidatorThread.run(ComplianceMonitor.java:227) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind(Native Method) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52) at org.glassfish.enterprise.iiop.impl.IIOPSSLSocketFactory.createServerSocket(IIOPSSLSocketFactory.java:293) at com.sun.corba.ee.impl.transport.SocketOrChannelAcceptorImpl.initialize(SocketOrChannelAcceptorImpl.java:91) ... 32 more
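
    The bottom of the trace ("Caused by: java.net.BindException: Address already in use") says the IIOP listener cannot bind localhost:3820, i.e. something, most likely a stale GlassFish or Java process left over from the previous run, is still holding the port. A hedged way to confirm that from a shell (exact tools vary by OS; lsof may not be installed by default):

      # is anything listening on the IIOP port?
      netstat -an | grep 3820

      # which process owns it, if lsof is available
      lsof -i :3820

      # or look for leftover Java/GlassFish processes and stop them
      ps -ef | grep java
      kill <pid>        # placeholder pid of the stale process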

    Read the article

  • Is Google Maps API V3 good enough to use now?

    - by Haroldo
    Firstly, only reply if you have experience using API V3 (i can speculate myself!) I had a little go with V3 and it looked great but would love to hear from someone who's given it a bit of use before I start working with it and deploy it on a live site. I'm only looking to do very basic things: put markers on a map custom markers info bubbles It all looks very easy with v3: http://www.svennerberg.com/2009/06/google-maps-api-3-the-basics/ is it stable enough?
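
    For reference, the three basics listed amount to very little v3 code. A minimal sketch, assuming a div with id="map" and the v3 script are already on the page (coordinates and icon path are placeholders):

      var map = new google.maps.Map(document.getElementById("map"), {
          zoom: 8,
          center: new google.maps.LatLng(51.5, -0.12),
          mapTypeId: google.maps.MapTypeId.ROADMAP
      });

      // marker with a custom icon
      var marker = new google.maps.Marker({
          position: new google.maps.LatLng(51.5, -0.12),
          map: map,
          icon: "images/custom-marker.png"
      });

      // info bubble opened on click
      var info = new google.maps.InfoWindow({ content: "Hello from v3" });
      google.maps.event.addListener(marker, "click", function () {
          info.open(map, marker);
      });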

    Read the article

  • MySQl Error #1064

    - by 01010011
    Hi, I keep getting this error: MySQL said: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO books.book(isbn10,isbn13,title,edition,author_f_name,author_m_na' at line 15 with this query: USE books; DROP TABLE IF EXISTS book; CREATE TABLE `books`.`book`( `book_id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, `isbn10` VARCHAR(15) NOT NULL, `isbn13` VARCHAR(15) NOT NULL, `title` VARCHAR(50) NOT NULL, `edition` VARCHAR(50) NOT NULL, `author_f_name` VARCHAR(50) NOT NULL, `author_m_name` VARCHAR(50) NOT NULL, `author_l_name` VARCHAR(50) NOT NULL, `cond` ENUM('as new','very good','good','fair','poor') NOT NULL, `price` DECIMAL(8,2) NOT NULL, `genre` VARCHAR(50) NOT NULL, `quantity` INT NOT NULL) INSERT INTO books.book(isbn10,isbn13,title,edition,author_f_name,author_m_name,author_l_name,cond,price,genre,quantity)** VALUES ('0136061699','978-0136061694','Software Engineering: Theory and Practice','4','Shari','Lawrence','Pfleeger','very good','50','Computing','2'); Any idea what the problem is?
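
    The error message points at the INSERT that immediately follows the closing parenthesis of the CREATE TABLE, which suggests the statement terminator between the two statements is missing, so MySQL tries to parse the INSERT as part of the CREATE. A sketch with the boundary terminated properly (only the tail of the CREATE is shown):

      ...
        `genre` VARCHAR(50) NOT NULL,
        `quantity` INT NOT NULL
      );   -- terminate the CREATE TABLE before the next statement

      INSERT INTO books.book
        (isbn10, isbn13, title, edition, author_f_name, author_m_name,
         author_l_name, cond, price, genre, quantity)
      VALUES
        ('0136061699', '978-0136061694',
         'Software Engineering: Theory and Practice', '4',
         'Shari', 'Lawrence', 'Pfleeger', 'very good', '50', 'Computing', '2');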

    Read the article

  • how to code: "floating image" google maps control

    - by Ygam
    Look at this URL: there is a "floating image", the one with an arrow that, when clicked, centers the map on a certain location. I have been desperately trying to find out how to do that, and I have coded my Google Maps app with just that one thing missing. Any ideas?
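
    In Maps API terms that "floating image" is just a custom control: a positioned DOM element whose click handler recenters the map. A minimal v3-style sketch (image path and coordinates are placeholders); since the question does not say which API version is in use, treat this as an illustration only:

      // build the floating image
      var controlDiv = document.createElement("div");
      var img = document.createElement("img");
      img.src = "images/center-arrow.png";
      controlDiv.appendChild(img);

      // clicking it recenters the map on a fixed location
      google.maps.event.addDomListener(controlDiv, "click", function () {
          map.setCenter(new google.maps.LatLng(40.7, -74.0));
      });

      // float it over the map as a custom control
      map.controls[google.maps.ControlPosition.RIGHT_TOP].push(controlDiv);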

    Read the article

  • #1366 - Incorrect integer value:MYsql

    - by rajanikant
    Hi everyone, I have a problem in MySQL. My table is: CREATE TABLE IF NOT EXISTS `contactform` ( `contact_id` int(11) NOT NULL AUTO_INCREMENT, `first_name` varchar(50) NOT NULL, `addition` varchar(50) NOT NULL, `surname` varchar(50) NOT NULL, `Address` varchar(200) NOT NULL, `postalcode` varchar(20) NOT NULL, `city` varchar(50) NOT NULL, `phone` varchar(20) NOT NULL, `emailaddress` varchar(30) NOT NULL, `dob` varchar(50) NOT NULL, `howtoknow` varchar(50) NOT NULL, `othersource` varchar(50) NOT NULL, `orientationsession` varchar(20) NOT NULL, `othersession` varchar(20) NOT NULL, `organisation` int(11) NOT NULL, `newsletter` int(2) NOT NULL, `iscomplete` int(11) NOT NULL, `registrationdate` date NOT NULL, PRIMARY KEY (`contact_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=39 ; When I run mysql>insert into contactform values('','abhi','sir','shukla','vbxcvb','342342','asdfasd','234234234','[email protected]','1999/5/16','via vrienden of familie','','19','20','6','1','1','2010-03-29') I get the following error: #1366 - Incorrect integer value: '' for column 'contact_id' at row 1 This query works fine on my local machine but gives an error on the server.
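
    The error is about the empty string '' being supplied for the integer auto-increment column contact_id; the server is most likely running in strict SQL mode (which rejects the implicit '' to integer conversion) while the local machine is not. A sketch of the usual fix is to let MySQL generate the id, for example by passing NULL in its place:

      INSERT INTO contactform
      VALUES (NULL, 'abhi', 'sir', 'shukla', 'vbxcvb', '342342', 'asdfasd',
              '234234234', '[email protected]', '1999/5/16',
              'via vrienden of familie', '', '19', '20', '6', '1', '1',
              '2010-03-29');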

    Read the article

  • bash: how to know NUM option in grep -A -B "on the fly" ?

    - by Michael Mao
    Hello everyone: I am trying to analyze my agent results from a collection of 20 txt files here. If you wonder about the background info, please go see my page; what I am doing here is just one step. Basically I would like to take only my agent's results out of the messy context, so I've got this command for a single file: cat run15.txt | grep -A 50 -E '^Agent Name: agent10479475' | grep -B 50 '^==' This means: after the regex match, continue forward by 50 lines, stop, then match a line separator that starts with "==" and go back by 50 lines, if possible (this would certainly clash with the very first line). This approach depends on the hard-coded line count of 50 being just enough to capture exactly one line separator. And it does not work if I do the following: cat run*.txt | grep -A 50 -E '^Agent Name: agent10479475' | grep -B 50 '^==' The output is a mess... My question is: how can I make sure grep knows exactly when to stop going forward, and when to stop going backward? Any suggestion or hint is much appreciated.
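
    Since grep -A/-B only understands fixed line counts, a pattern-to-pattern tool fits this better. A small sketch using sed range addresses (the agent name is the one from the question); it prints from each 'Agent Name' header up to the next '==' separator, however many lines that turns out to be:

      # print from the matching header up to (and including) the next separator
      sed -n '/^Agent Name: agent10479475/,/^==/p' run15.txt

      # the same works across all files, because the range restarts at every new match
      sed -n '/^Agent Name: agent10479475/,/^==/p' run*.txt

      # awk alternative that leaves the separator line out
      awk '/^Agent Name: agent10479475/{show=1} /^==/{show=0} show' run*.txt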

    Read the article

  • Import-Pssession is not importing cmdlets when used in a custom module

    - by Douglas Plumley
    I have a PowerShell script/function that works great when I use it in my PowerShell profile or manually copy/paste the function in the PowerShell window. I'm trying to make the function accessible to other members of my team as a module. I want to have the module stored in a central place so we can all add it to our PSModulePath. Here is a copy of the basic function: Function Connect-O365{ $o365cred = Get-Credential [email protected] $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $o365cred -Authentication Basic -AllowRedirection Import-PSSession $session365 -AllowClobber } If I save this function in my PowerShell profile it works fine. I can dot source a *.ps1 script with this function in it and it works as well. The issue is when I save the function as a *.psm1 PowerShell script module. The function runs fine but none of the exported commands from the Import-PSSession are available. I think this may have something to do with the module scope. I'm looking for suggestions on how to get around this. EDIT When I create the following module and run Connect-O365 the imported cmdlets will not be available. $scriptblock = { Function Connect-O365{ $o365cred = Get-Credential [email protected] $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection Import-PSSession $session365 -AllowClobber } } New-Module -Name "Office 365" -ScriptBlock $scriptblock When I import the next module without the Connect-O365 function the imported cmdlets are available. $scriptblock = { $o365cred = Get-Credential [email protected] $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection Import-PSSession $session365 -AllowClobber } New-Module -Name "Office 365" -ScriptBlock $scriptblock This appears to be a scope issue of some sort, just not sure how to get around it.
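
    One commonly suggested workaround, sketched here under the assumption that importing into the global scope is acceptable, is to wrap the output of Import-PSSession in Import-Module -Global so the proxied cmdlets land in the caller's scope instead of the module's private scope:

      Function Connect-O365 {
          $o365cred = Get-Credential [email protected]
          $session365 = New-PSSession -ConfigurationName Microsoft.Exchange `
              -ConnectionUri "https://ps.outlook.com/powershell/" `
              -Credential $o365cred -Authentication Basic -AllowRedirection
          # Import-PSSession returns the temporary module it creates;
          # importing that module with -Global exposes its commands globally
          Import-Module (Import-PSSession $session365 -AllowClobber) -Global
      }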

    Read the article

  • Runtime.exec causes duplicate JVM to hang indefinitely until killed (Solaris 10)

    - by John
    All, We are running a J2EE application on WebLogic server 9.2 MP2 with a jrockit 64-bit JVM (27.3.1) on Solaris 10. We call use runtime.exec to call an executable called jfmerge to create PDF documents. We have found that in Solaris, when runtime.exec is called, a duplicate JVM is temporarily spawned to kick off the jfmerge process. While this is inefficient (our JVM is 5 GB, thus the duplicated shell JVM is also 5 GB), the major problem lies in the fact that when there is heavy load on this functionality (PDF generation) in our application, sometimes the duplicated JVM never exits. When the JVM hangs, the servers create large issues (extreme application slowness and terminated user sessions) as the entire duplicate JVM get's all of its 5 GB of process size written to disk swap. We have noted the following hung thread correlated with a hung JVM process until the process is manually killed: "[STUCK] ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'" id=3463 idx=0x158 tid=3460 prio=1 alive, in native, daemon at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method) at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:30) at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java) at java/io/FileInputStream.read(FileInputStream.java:194) at java/lang/UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:227) at java/io/BufferedInputStream.fill(BufferedInputStream.java:218) at java/io/BufferedInputStream.read(BufferedInputStream.java:235) ^-- Holding lock: java/io/BufferedInputStream@0xfffffffec6510470[thin lock] at gov/v3/common/formgeneration/sessionbean/FormsBean.getProcessStatus(FormsBean.java:809) at gov/v3/common/formgeneration/sessionbean/FormsBean.createPDF(FormsBean.java:750) at gov/v3/common/formgeneration/sessionbean/FormsBean.getTemplateDetails(FormsBean.java:450) at gov/v3/common/formgeneration/sessionbean/FormsBean.generateSinglePDF(FormsBean.java:1371) at gov/v3/common/formgeneration/sessionbean/FormsBean.generatePDF(FormsBean.java:263) at gov/v3/common/formgeneration/sessionbean/FormsBean.endorseDocument(FormsBean.java:2377) at gov/v3/common/formgeneration/sessionbean/Forms_qaco28_EOImpl.endorseDocument(Forms_qaco28_EOImpl.java:214) at gov/v3/delegates/common/FormsAndNoticesDelegate.endorseDocument(FormsAndNoticesDelegate.java:128) at gov/v3/actions/common/EndorseDocumentAction.executeRequest(EndorseDocumentAction.java:68) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.dispatchToExecuteMethod(V3CommonDispatchAction.java:532) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.executeBaseAction(V3CommonDispatchAction.java:336) at gov/v3/fwk/controller/struts/action/V3BaseDispatchAction.execute(V3BaseDispatchAction.java:69) at org/apache/struts/action/RequestProcessor.processActionPerform(RequestProcessor.java:484) at gov/v3/fwk/controller/struts/requestprocessor/V3TilesRequestProcessor.processActionPerform(V3TilesRequestProcessor.java:384) at org/apache/struts/action/RequestProcessor.process(RequestProcessor.java:274) at org/apache/struts/action/ActionServlet.process(ActionServlet.java:1482) at org/apache/struts/action/ActionServlet.doGet(ActionServlet.java:507) at gov/v3/fwk/controller/struts/servlet/V3ControllerServlet.doGet(V3ControllerServlet.java:110) at javax/servlet/http/HttpServlet.service(HttpServlet.java:743) at javax/servlet/http/HttpServlet.service(HttpServlet.java:856) at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at 
weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3231) at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:121) at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2002) at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:1908) at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1362) at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209) at weblogic/work/ExecuteThread.run(ExecuteThread.java:181) at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method) -- end of trace We would like to do a couple of things: 1.) Prevent the spawning of a duplicate JVM, as we do not need any of it's functions when executing the simple jfmerge executable, and it creates massive overhead. 2.) In the short term at least prevent this duplicate JVM from handing indefinitely.

    Read the article

  • Use cases of [ordered], the new PowerShell 3.0 feature

    - by Roman Kuzmin
    PowerShell 3.0 CTP1 introduces a new feature [ordered] which is somewhat a shortcut for OrderedDictionary. I cannot imagine practical use cases of it. Why is this feature really useful? Can somebody provide some useful examples? Example: this is, IMHO, rather demo case than practical: $a = [ordered]@{a=1;b=2;d=3;c=4} (I do not mind if it is still useful, then I am just looking for other useful cases). I am not looking for use cases of OrderedDictionary, it is useful, indeed. But we can use it directly in v2.0 (and I do a lot). I am trying to understand why is this new feature [ordered] needed in addition. Collected use cases from answers: $hash = [ordered]@{} is shorter than $hash = New-Object System.Collections.Specialized.OrderedDictionary N.B. ordered is not a real shortcut for the type. New-Object ordered does not work. N.B. 2: But this is still a good shortcut because (I think, cannot try) it creates typical for PowerShell case insensitive dictionary. The equivalent command in v2.0 is too long, indeed: New-Object System.Collections.Specialized.OrderedDictionary]([System.StringComparer]::OrdinalIgnoreCase)
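
    A tiny illustration of the difference (the enumeration order of the plain hashtable may vary, which is exactly the point):

      $plain   = @{a=1; b=2; d=3; c=4}           # key order not guaranteed
      $ordered = [ordered]@{a=1; b=2; d=3; c=4}  # keys come back a, b, d, c

      $plain.Keys   -join ','    # e.g. d,c,b,a (implementation-dependent)
      $ordered.Keys -join ','    # a,b,d,c

      # handy wherever column or property order matters, e.g. building objects
      [pscustomobject]$ordered | ConvertTo-Csv -NoTypeInformation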

    Read the article

  • Pass Memory in GB Using Import-CSV Powershell to New-VM in Hyper-V Version 3

    - by PowerShell
    I created the below function to pass memory from a csv file to create a VM in Hyper-V Version 3 Function Install-VM { param ( [Parameter(Mandatory=$true)] [int64]$Memory=512MB ) $VMName = "dv.VMWIN2K8R2-3.Hng" $vmpath = "c:\2012vms" New-VM -MemoryStartupBytes ([int64]$memory*1024) -Name $VMName -Path $VMPath -Verbose } Import-Csv "C:\2012vms\Vminfo1.csv" | ForEach-Object { Install-VM -Memory ([int64]$_.Memory) } But when i try to create the VM it says mismatch between the memory parameter passed from import-csv, i receive an error as below VERBOSE: New-VM will create a new virtual machine "dv.VMWIN2K8R2-3.Hng". New-VM : 'dv.VMWIN2K8R2-3.Hng' failed to modify device 'Memory'. (Virtual machine ID CE8D36CA-C8C6-42E6-B5C6-2AA8FA15B4AF) Invalid startup memory amount assigned for 'dv.VMWIN2K8R2-3.Hng'. The minimum amount of memory you can assign to a virtual machine is '8' MB. (Virtual machine ID CE8D36CA-C8C6-42E6-B5C6-2AA8FA15B4AF) A parameter that is not valid was passed to the operation. At line:48 char:9 + New-VM -ComputerName $HyperVHost -MemoryStartupBytes ([int64]$memory*10 ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (Microsoft.HyperV.PowerShell.VMTask:VMTask) [New-VM], VirtualizationOpe rationFailedException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.NewVMCommand Also please not in the csv file im passing memory as 1,2,4.. etc as shown below, and converting them to MB by multiplying them with 1024 later Memory 1 Can Anyone help me out on how to format and pass the memory details to the function
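
    If the CSV column holds whole gigabytes (1, 2, 4, ...), the multiplication is the likely culprit: 1 * 1024 is only 1024 bytes, far below the 8 MB minimum quoted in the error. A hedged sketch using PowerShell's built-in byte-unit constants instead:

      Function Install-VM {
          param (
              [Parameter(Mandatory=$true)]
              [int64]$Memory        # gigabytes, as read from the CSV
          )
          $VMName = "dv.VMWIN2K8R2-3.Hng"
          $VMPath = "c:\2012vms"
          # 1GB is a built-in PowerShell constant (1073741824 bytes)
          New-VM -MemoryStartupBytes ($Memory * 1GB) -Name $VMName -Path $VMPath -Verbose
      }

      Import-Csv "C:\2012vms\Vminfo1.csv" |
          ForEach-Object { Install-VM -Memory ([int64]$_.Memory) }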

    Read the article

  • Of C# Iterators and Performance

    - by James Michael Hare
    Some of you reading this will be wondering, "what is an iterator" and think I'm locked in the world of C++.  Nope, I'm talking C# iterators.  No, not enumerators, iterators.   So, for those of you who do not know what iterators are in C#, I will explain it in summary, and for those of you who know what iterators are but are curious of the performance impacts, I will explore that as well.   Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do.  I don't know how many times at work I've had a code review on my code and have someone ask me, "what's that yield word do?"   Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post.  Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?   So, to begin, let's look at a couple of methods.  This is a new (albeit contrived) method called Every(...).  The goal of this method is to access and enumeration and return every nth item in the enumeration (including the first).  So Every(2) would return items 0, 2, 4, 6, etc.   Now, if you wanted to write this in the traditional way, you may come up with something like this:       public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)     {         List<T> newList = new List<T>();         int count = 0;           foreach (var i in list)         {             if ((count++ % interval) == 0)             {                 newList.Add(i);             }         }           return newList;     }     So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item.  Pretty straight forward.   The problem?  Well, Every<T>(...) will construct a list containing every nth item whether or not you care.  What happens if you were searching this result for a certain item and find that item after five tries?  You would have generated the rest of the list for nothing.   Enter iterators.  This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for.  This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list.   We see this all the time in Linq, where many expressions are chained together to do complex processing on a list.  This would be very expensive if each of these expressions evaluated their entire possible result set on call.    Let's look at the same example function, this time using an iterator:       public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)     {         int count = 0;         foreach (var i in list)         {             if ((count++ % interval) == 0)             {                 yield return i;             }         }     }   Notice it does not create a new return value explicitly, the only evidence of a return is the "yield return" statement.  What this means is that when an item is requested from the enumeration, it will enter this method and evaluate until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends.   
Behind the scenes, this is all done with a class that the CLR creates behind the scenes that keeps track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start at next time.   It doesn't seem like a big deal, does it?  But keep in mind the key point here: it only returns items as they are requested. Thus if there's a good chance you will only process a portion of the return list and/or if the evaluation of each item is expensive, an iterator may be of benefit.   This is especially true if you intend your methods to be chainable similar to the way Linq methods can be chained.    For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10.  We could write that as:       List<int> someList = new List<int>();         // fill list here         someList.Every(10).TakeWhile(i => i <= 10);     Now is the difference more apparent?  If we use the first form of Every that makes a copy of the list.  It's going to copy the entire list whether we will need those items or not, that can be costly!    With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.   So, sounds neat eh?  But what's the cost is what you're probably wondering.  So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things.    Now, iteration isn't free.  If you are more likely than not to iterate the entire collection every time, iterator has some very slight overhead:   Copy vs Iterator on 100% of Collection (10,000 iterations) Collection Size Num Iterated Type Total ms 5 5 Copy 5 5 5 Iterator 5 50 50 Copy 28 50 50 Iterator 27 500 500 Copy 227 500 500 Iterator 247 5000 5000 Copy 2266 5000 5000 Iterator 2444 50,000 50,000 Copy 24,443 50,000 50,000 Iterator 24,719 500,000 500,000 Copy 250,024 500,000 500,000 Iterator 251,521   Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then getting just a slight bit worse for larger lists.  In reality, given the number of items and iterations, the result is near negligible, but just to show that iterators come at a price.  However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.   However, if we only partially evaluate less and less through the list, the savings start to show and make it well worth the overhead.  Let's look at what happens if you stop looking after 80% of the list:   Copy vs Iterator on 80% of Collection (10,000 iterations) Collection Size Num Iterated Type Total ms 5 4 Copy 5 5 4 Iterator 5 50 40 Copy 27 50 40 Iterator 23 500 400 Copy 215 500 400 Iterator 200 5000 4000 Copy 2099 5000 4000 Iterator 1962 50,000 40,000 Copy 22,385 50,000 40,000 Iterator 19,599 500,000 400,000 Copy 236,427 500,000 400,000 Iterator 196,010       Notice that the iterator form is now operating quite a bit faster.  
But the savings really add up if you stop on average at 50% (which most searches would typically do):     Copy vs Iterator on 50% of Collection (10,000 iterations) Collection Size Num Iterated Type Total ms 5 2 Copy 5 5 2 Iterator 4 50 25 Copy 25 50 25 Iterator 16 500 250 Copy 188 500 250 Iterator 126 5000 2500 Copy 1854 5000 2500 Iterator 1226 50,000 25,000 Copy 19,839 50,000 25,000 Iterator 12,233 500,000 250,000 Copy 208,667 500,000 250,000 Iterator 122,336   Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time.  And this is only for one level deep.  If we are using this in a chain of query expressions it only adds to the savings.   So my recommendation?  If you have a resonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator.  The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.   Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.

    Read the article

  • Nagios check_bgp_neighbors plugin showing critical status

    - by user141610
    I am trying to configure nagios check_bgp_neighbors plug-in on Ubuntu and followed README file of check_bgp_neighbors plug-in. I have made following changes: define command{ command_name check_bgp_all command_line $USER1$/check_bgp_neighbors -H $HOSTADDRESS$ -C $USER3$ -n $ARG1$ -n $ARG2$ } to define command{ command_name check_bgp_all command_line /usr/local/nagios/libexec/check_bgp_neighbors.sh -H xx.xx.xx.49 -C snmpName -n xx.xx.xx.50 And define service{ use server-service hostgroup_name svc-bgp1 service_description BGP Check 1 check_command check_bgp_all!10.0.0.1!172.16.0.2 } to define service{ use generic-service hostgroup_name svc-bgp1 service_description BGP Check 1 check_command check_bgp_all!xx.xx.xx.50 } xx.xx.xx.49 is the IP of the host router and xx.xx.xx.50 is the IP of eBGP neighbour. Status information: line: neighbor:xx.xx.xx.50:sent:78838:received:9769 Failed: status:6 prefixes:16 sent:0 received:1 Log [1353997904] SERVICE NOTIFICATION: router1;router1;BGP CHECK 2;CRITICAL;notify-service-by-email;line: neighbor:103.7.248.50:sent:78842:received:9772 [1353997904] SERVICE NOTIFICATION: router1;router1;BGP CHECK 2;CRITICAL;notify-service-by-sms;line: neighbor:103.7.248.50:sent:78842:received:9772 Why does it show critical status???? I am not getting response for this question, if you need additional information please mention it in comment.
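
    One detail worth checking (an observation about the snippets, not a confirmed root cause): the original command definition expects two neighbours, each behind its own -n ($ARG1$ and $ARG2$), while the edited service passes a single argument and the command line now hard-codes the host, community and neighbour. A consistent single-neighbour pair might look like the sketch below, keeping the macros so the same command works for other routers; the IP and SNMP community stay placeholders:

      define command{
          command_name    check_bgp_all
          command_line    $USER1$/check_bgp_neighbors -H $HOSTADDRESS$ -C $USER3$ -n $ARG1$
      }

      define service{
          use                     generic-service
          hostgroup_name          svc-bgp1
          service_description     BGP Check 1
          check_command           check_bgp_all!xx.xx.xx.50
      }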

    Read the article

  • Cisco 2911 - problem with DHCP

    - by bluszcz
    Hi, I am configuring the DHCP daemon on a Cisco 2911. At some point I assigned the address 192.168.50.50 to one box (using a MAC address 'relation'). When I wanted to save the config, I got a warning that address 192.168.50.50 is already in the pool, which sounds a bit weird: it was the first host I started configuring. I tried the following commands: clear ip dhcp server statistics clear ip dhcp conflict but the first one doesn't output anything (and show statistics shows that there are 2 addresses in the pool), and the second one throws a "there is no conflicted ips" message. How can I force/purge/clear/allow 192.168.50.50 to be bound to this box?
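
    A sketch of the usual IOS pattern for this, offered as an assumption about the setup rather than a verified fix: clear any existing lease on the address, keep the dynamic pool from offering it to other clients, and put the MAC-based reservation in its own host pool (MAC and mask are placeholders):

      ! release any current binding on that address (exec mode)
      clear ip dhcp binding 192.168.50.50

      ! stop the dynamic pool from offering this address to other clients
      ip dhcp excluded-address 192.168.50.50

      ! dedicated host pool for the reserved box
      ip dhcp pool STATIC-BOX-50
         host 192.168.50.50 255.255.255.0
         hardware-address 0011.2233.4455   ! placeholder MAC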

    Read the article
