Search Results

Search found 4533 results on 182 pages for 'castle proxy'.


  • (resolved) empty response body in ajax (or 206 Partial Content)

    - by Nikita Rybak
    Hi guys, I'm feeling completely stupid because I've spent two hours solving task which should be very simple and which I solved many times before. But now I'm not even sure in which direction to dig. I fail to fetch static content using ajax from local servers (Apache and Mongrel). I get responses 200 and 206 (depending on the server), empty response text (although Content-Length header is always correct), firebug shows request in red. Javascript is very generic, I'm getting same results even here: http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first (just change document location to 'http://localhost:3000/whatever') So, it's probably not the cause. Well, now I'm out of ideas. I can also post http headers, if it'll help. Thanks! Response Headers Connection close Date Sat, 01 May 2010 21:05:23 GMT Last-Modified Sun, 18 Apr 2010 19:33:26 GMT Content-Type text/html Content-Length 7466 Request Headers Host localhost:3000 User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://www.w3schools.com/ajax/tryit_view.asp Origin http://www.w3schools.com Response Headers Date Sat, 01 May 2010 21:54:59 GMT Server Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_jk/1.2.28 Etag "3d5cbdb-fb4-4819c460d4a40" Accept-Ranges bytes Content-Length 4020 Cache-Control max-age=7200, public, proxy-revalidate Expires Sat, 01 May 2010 23:54:59 GMT Content-Range bytes 0-4019/4020 Keep-Alive timeout=5, max=100 Connection Keep-Alive Content-Type application/javascript Request Headers Host localhost User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Origin null UPDATED: I've found a problem, it was about cross-domain requests. I knew that there are restrictions, but thought they're relaxed for local filesystem and local servers. (and expected more descriptive error message, anyway) Thanks everybody!

    Read the article

  • generated service mock: everything but RhinoMocks fails?

    - by hko
    I have the "quest" of finding the next mocking framework for my company, and basically it's down to NSubstitute (simplest syntax, but no strict mocks), FakeItEasy (best reviews, Roy Osherove bonus, and slightly better lib support than NSubstitute), and Moq (best "other libs support", biggest feature set, downside: mock.Object). We definitely want to move on from RhinoMocks, e.g. because of the unhelpful interaction-test error messages (it should tell me what the parameter was instead, when a verification fails). So I was pretty surprised the other day (that was yesterday) when I found out RhinoMocks could do something that every other mocking framework fails at: mocking an autogenerated SomethingService (a typical VS autogenerated service with a default constructor in a partial class). Please don't argue about the design.. I intend to write lightweight integration tests (and some unit tests), and I can't mess around with the service; the product is installed on too many customers' systems. See this code: // here the NSubstitute and FakeItEasy equivalents throw an exception.. see below TicketStoreService fakeTicketStoreService = MockRepository.GenerateMock<TicketStoreService>(); fakeTicketStoreService.Expect(service => service.DoSomething(Arg.Is(new Guid()))).Return(new Guid()); fakeTicketStoreService.DoSomething(Arg.Is(new Guid())); fakeTicketStoreService.VerifyAllExpectations(); Note that DoSomething is a non-virtual method call in an autogenerated class. So it shouldn't work, according to common knowledge. But it does. The problem is that it's the only (non-commercial) framework that can do this: Rhino.Mocks works, and verification works too. FakeItEasy says it doesn't find a default constructor (probably just the wrong exception message): No default constructor was found on the type SomeNamespace.TicketStoreService. Moq gives something sane and understandable: Invalid setup on a non-virtual (overridable in VB) member: service => service.DoSomething. NSubstitute gives the message System.NotSupportedException: Cannot serialize member System.ComponentModel.Component.Site of type System.ComponentModel.ISite because it is an interface. I'm really wondering what's going on here with the frameworks, except Moq. The "fancy new" frameworks seem to have an initial perf hit too, probably preparing some type cache and serializing stuff, whilst RhinoMocks somehow manages to create a very "slim" mock without recursion. I have to admit I didn't like RhinoMocks very much, but here it shines.. unfortunately. So, is there a way to get that to work with newer (non-commercial!) mocking frameworks, or somehow get a sane error message out of Rhino.Mocks? And why can Rhino.Mocks achieve this, when clearly every mocking framework states it can only work with virtual methods when given a concrete class? Let's not derail the discussion by talking about alternative approaches like Extract & Override or runtime-proxy mocking frameworks like JustMock/TypeMock/Moles or the new Fakes framework; I know these, but they would be less ideal solutions, for reasons beyond this topic. Any help appreciated..
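
    For what it's worth, a common way to get the newer frameworks working against a generated service is to give it an interface they can mock. The sketch below is an assumption-heavy illustration (it presumes the generated proxy exposes a Guid DoSomething(Guid) method, and ITicketStoreService is an invented name): because the VS-generated class is partial, the interface can be attached in a second partial file without touching generated code, and Moq can then mock the interface instead of the class.

      using System;
      using Moq;

      // Hypothetical seam: a second partial file declares that the generated proxy
      // implements a hand-written interface, without touching the generated code.
      public interface ITicketStoreService
      {
          Guid DoSomething(Guid input);
      }

      public partial class TicketStoreService : ITicketStoreService { }

      public class TicketStoreServiceTests
      {
          public void DoSomething_IsVerifiable_WithMoq()
          {
              var fake = new Mock<ITicketStoreService>();
              fake.Setup(s => s.DoSomething(It.IsAny<Guid>())).Returns(Guid.NewGuid());

              fake.Object.DoSomething(Guid.NewGuid()); // the code under test would make this call

              fake.Verify(s => s.DoSomething(It.IsAny<Guid>()), Times.Once());
          }
      }

    The same shape should also work with NSubstitute and FakeItEasy, since interface mocking does not depend on virtual members.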

    Read the article

  • Retrieving/Updating Entity Framework POCO objects that already exist in the ObjectContext

    - by jslatts
    I have a project using Entity Framework 4.0 with POCOs (data is stored in SQL DB, lazyloading is enabled) as follows: public class ParentObject { public int ID {get; set;} public virtual List<ChildObject> children {get; set;} } public class ChildObject { public int ID {get; set;} public int ChildRoleID {get; set;} public int ParentID {get; set;} public virtual ParentObject Parent {get; set;} public virtual ChildRoleObject ChildRole {get; set;} } public class ChildRoleObject { public int ID {get; set;} public string Name {get; set;} public virtual List<ChildObject> children {get; set;} } I want to create a new ChildObject, assign it a role, then add it to an existing ParentObject. Afterwards, I want to send the new ChildObject to the caller. The code below works fine until it tries to get the object back from the database. The newChildObjectInstance only has the ChildRoleID set and does not contain a reference to the actual ChildRole object. I try and pull the new instance back out of the database in order to populate the ChildRole property. Unfortunately, in this case, instead of creating a new instance of ChildObject and assigning it to retreivedChildObject, EF finds the existing ChildObject in the context and returns the in-memory instance, with a null ChildRole property. public ChildObject CreateNewChild(int id, int roleID) { SomeObjectContext myRepository = new SomeObjectContext(); ParentObject parentObjectInstance = myRepository.GetParentObject(id); ChildObject newChildObjectInstance = new ChildObject() { ParentObject = parentObjectInstance, ParentID = parentObjectInstance.ID, ChildRoleID = roleID }; parentObjectInstance.children.Add(newChildObjectInstance); myRepository.Save(); ChildObject retreivedChildObject = myRepository.GetChildObject(newChildObjectInstance.ID); string assignedRoleName = retreivedChildObject.ChildRole.Name; //Throws exception, ChildRole is null return retreivedChildObject; } I have tried setting MergeOptions to Overwrite, calling ObjectContext.Refresh() and ObjectContext.DetectChanges() to no avail... I suspect this is related to the proxy objects that EF injects when working with POCOs. Has anyone run into this issue before? If so, what was the solution?
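
    A sketch of one possible workaround (the helper and its placement are illustrative, not part of the original repository): rather than re-querying the child, explicitly load the ChildRole navigation property on the instance that is already tracked. This assumes the same ObjectContext that performed the save is accessible.

      using System.Data.Objects;

      public static class ChildObjectLoader
      {
          // Explicitly populates child.ChildRole on the tracked entity, so the
          // in-memory instance no longer comes back with a null reference.
          public static void LoadRole(ObjectContext context, ChildObject child)
          {
              context.LoadProperty(child, c => c.ChildRole);
          }
      }

    An alternative along the same lines would be to fetch the actual ChildRoleObject by roleID and assign it to newChildObjectInstance.ChildRole before saving, so no second load is needed.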

    Read the article

  • Spring transaction management breaks hibernate cascade

    - by TimmyJ
    I'm having a problem where the addition of spring's transaction management to an application causes Hibernate to throw the following error: org.hibernate.HibernateException: A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance: org.fstrf.masterpk.domain.ReportCriteriaBean.treatmentArms org.hibernate.engine.Collections.processDereferencedCollection(Collections.java:96) org.hibernate.engine.Collections.processUnreachableCollection(Collections.java:39) org.hibernate.event.def.AbstractFlushingEventListener.flushCollections(AbstractFlushingEventListener.java:218) org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:77) org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:26) org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000) org.springframework.orm.hibernate3.SpringSessionSynchronization.beforeCommit(SpringSessionSynchronization.java:135) org.springframework.transaction.support.TransactionSynchronizationUtils.triggerBeforeCommit(TransactionSynchronizationUtils.java:72) org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerBeforeCommit(AbstractPlatformTransactionManager.java:905) org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:715) org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:701) org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:321) org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:116) org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) $Proxy92.saveNewReportCriteria(Unknown Source) org.fstrf.masterpk.domain.logic.MasterPkFacade.saveNewReportCriteria(MasterPkFacade.java:134) org.fstrf.masterpk.controllers.ReportCriteriaController.setupReportType(ReportCriteriaController.java:302) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) java.lang.reflect.Method.invoke(Method.java:585) org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java:413) org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:134) org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:310) org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:297) org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875) org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:809) org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571) org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511) javax.servlet.http.HttpServlet.service(HttpServlet.java:710) javax.servlet.http.HttpServlet.service(HttpServlet.java:803) 
org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) I'm using Spring 2.5 and annotations to implement this management. Here is the class containing the saveNewReportCriteria method (which, as can be seen by the stack trace, is causing the error) @Transactional( propagation = Propagation.REQUIRED, isolation = Isolation.DEFAULT, readOnly = false) public class HibernateReportCriteriaDao implements ReportCriteriaDao{ private HibernateTemplate hibernateTemplate; public Integer saveNewReportCriteria(ReportCriteriaBean reportCriteria) { hibernateTemplate.save(reportCriteria); List<Integer> maxIdList = hibernateTemplate.find("SELECT max(id) from ReportCriteriaBean"); logger.info("ID of newly saved list is: " + maxIdList.get(0)); return maxIdList.get(0); } public void setHibernateTemplate(HibernateTemplate hibernateTemplate) { this.hibernateTemplate = hibernateTemplate; } } Then I added the following sections to my configuration files to tell spring that I am using annotation driven transaction management: <bean id="actgDataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/actg" /> <property name="resourceRef" value="true" /> </bean> <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="actgDataSource" /> </bean> <tx:annotation-driven/> I'm pretty sure that the de-referencing error is being caused due to the proxy class that Spring AOP creates and uses in order to handle transaction management, but I have no idea how I'd go about fixing it.

    Read the article

  • Detecting abuse for post rating system

    - by Steven smethurst
    I am using a WordPress plugin called "GD Star Rating" to allow my users to vote on stories that I post to one of my websites: http://everydayfiction.com/ Recently we have been seeing a lot of abuse of the system: stories that have obviously been voted up artificially. "GD Star Rating" creates some detailed logs when a user votes on a story, including IP, time of vote, user_agent, etc. For example, this story has 181 votes with an average of 5.7: http://www.everydayfiction.com/snowman-by-shaun-simon/ Most other stories only get around 40 votes each day. At first I thought that the story got onto a social bookmarking site (Digg, StumbleUpon, etc.), but after checking the logs I found that this story is getting the same amount of traffic that a normal story gets (~2k-3k). I checked whether all the votes for this particular story were coming from the same IP address. I could see this happening if a user was at a school's computer lab using all the lab computers to vote up this story. Not one duplicate IP address in the log for this story. SELECT ip, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY (ip ) ORDER BY count DESC Next I thought that a user might be using a proxy to vote up a story. I checked this by grouping the browser user_agent values together to see if there was a single browser voting in a particular pattern. At most 7 users were using a similar browser, but they voted sporadically (1-5), so no evidence of wrongdoing. SELECT user_agent, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY ( user_agent) ORDER BY count DESC The next check was to see if all the votes came in at once. Maybe someone has a really interesting bot that can change the user_agent and use proxies, etc. At most 5 votes came within 2 minutes of each other. There doesn't seem to be any regularity in how people vote (i.e. a 5-star vote does not come in once a minute). SELECT * FROM wp_gdsr_votes_log WHERE id =3932 AND vote=5 ORDER BY wp_gdsr_votes_log.voted DESC The obvious solution to this problem is to force people to log in before they are allowed to vote, but I would prefer not to go down that route unless it is absolutely necessary. I'm looking for suggestions on things to test for to detect the abuse.

    Read the article

  • Hibernate MappingException Unknown entity: $Proxy2

    - by slynn1324
    I'm using Hibernate annotations and have a VERY basic data object: import java.io.Serializable; import javax.persistence.Entity; import javax.persistence.Id; @Entity public class State implements Serializable { /** * */ private static final long serialVersionUID = 1L; @Id private String stateCode; private String stateFullName; public String getStateCode() { return stateCode; } public void setStateCode(String stateCode) { this.stateCode = stateCode; } public String getStateFullName() { return stateFullName; } public void setStateFullName(String stateFullName) { this.stateFullName = stateFullName; } } and am trying to run the following test case: public void testCreateState(){ Session s = HibernateUtil.getSessionFactory().getCurrentSession(); Transaction t = s.beginTransaction(); State state = new State(); state.setStateCode("NE"); state.setStateFullName("Nebraska"); s.save(s); t.commit(); } and get an org.hibernate.MappingException: Unknown entity: $Proxy2 at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:628) at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1366) at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:121) .... I haven't been able to find anything referencing the $Proxy part of the error - and am at a loss.. Any pointers to what I'm missing would be greatly appreciated. hibernate.cfg.xml <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property> <property name="connection.url">jdbc:hsqldb:hsql://localhost/xdb</property> <property name="connection.username">sa</property> <property name="connection.password"></property> <property name="current_session_context_class">thread</property> <property name="dialect">org.hibernate.dialect.HSQLDialect</property> <property name="show_sql">true</property> <property name="hbm2ddl.auto">update</property> <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property> <mapping class="com.test.domain.State"/> in HibernateUtil.java public static SessionFactory getSessionFactory(boolean testing ) { if ( sessionFactory == null ){ try { String configPath = HIBERNATE_CFG; AnnotationConfiguration config = new AnnotationConfiguration(); config.configure(configPath); sessionFactory = config.buildSessionFactory(); } catch (Exception e){ e.printStackTrace(); throw new ExceptionInInitializerError(e); } } return sessionFactory; }

    Read the article

  • Deserialization error in a new environment

    - by cerhart
    I have a web application that calls a third-party web service. When I run it locally, I have no problems, but when I move it to my production environment, I get the following error: There is an error in XML document (2, 428). Stack: at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle) at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at RMXClasses.RMXContactService.ContactService.getActiveSessions(String user, String pass) in C:\Users\hp\Documents\Visual Studio 2008\Projects\ReklamStore\RMXClasses\Web References\RMXContactService\Reference.cs:line 257. I have used the same web.config file from the production environment, but it still works locally. My local machine is running Vista Home Edition and the production environment is Windows Server 2003. The application is written in ASP.NET 3.5; weirdly, under the ASP.NET config tab in IIS, 3.5 doesn't show up in the drop-down list, although that version of the framework is installed. The error is not being thrown in my code; it happens during deserialization. I called the method on the proxy, and I have checked the arguments and they are OK. I have also logged the SOAP request and response, and they both look OK as well. I am really at a loss here. Any ideas? SOAP log: This is the SOAP response that the program seems to have trouble parsing only on Server 2003. On my machine the SOAP is identical, and yet it parses with no problems. SoapResponse BeforeDeserialize; <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="urn:ContactService" xmlns:ns2="http://api.yieldmanager.com/types" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><SOAP-ENV:Body><ns1:getActiveSessionsResponse> <sessions SOAP-ENC:arrayType="ns2:session[1]" xsi:type="ns2:array_of_session"> <item xsi:type="ns2:session"> <token xsi:type="xsd:string">xxxxxxxxxxxxxxxxxxxx1ae12517584b</token> <creation_time xsi:type="xsd:dateTime">2009-09-25T05:51:19Z</creation_time> <modification_time xsi:type="xsd:dateTime">2009-09-25T05:51:19Z</modification_time> <ip_address xsi:type="xsd:string">xxxxxxxxxx</ip_address> <contact_id xsi:type="xsd:long">xxxxxx</contact_id></item></sessions> </ns1:getActiveSessionsResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>
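
    Since "(2, 428)" is a line and column position in the response being deserialized, one quick diagnostic (a sketch, not from the original post; the file names are assumptions for the two logged payloads) is to print what sits around that position in each captured response and compare them. The InnerException on the thrown exception often points at the element that actually tripped the XmlSerializer.

      using System;
      using System.IO;

      class FindErrorPosition
      {
          static void Main()
          {
              // Compare the logged responses captured locally and on Server 2003.
              foreach (var file in new[] { "soap-local.xml", "soap-server2003.xml" })
              {
                  string[] lines = File.ReadAllLines(file);
                  string line2 = lines.Length > 1 ? lines[1] : string.Empty;

                  int column = 428;                                   // from "(2, 428)"
                  int start = Math.Max(0, column - 40);
                  int length = Math.Min(80, Math.Max(0, line2.Length - start));

                  Console.WriteLine("{0}: ...{1}...", file,
                      length > 0 ? line2.Substring(start, length) : "(line 2 shorter than column 428)");
              }
          }
      }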

    Read the article

  • Unable to read data from the transport connection: the connection was closed

    - by webdreamer
    The exception is Remoting Exception - Authentication Failure. The detailed message says "Unable to read data from the transport connection: the connection was closed." I'm having trouble with creating two simple servers that can comunicate as remote objects in C#. ServerInfo is just a class I created that holds the IP and Port and can give back the address. It works fine, as I used it before, and I've debugged it. Also the server is starting just fine, no exception is thrown, and the channel is registered without problems. I'm using Forms to do the interfaces, and call some of the methods on the server, but didn't find any problems in passing the parameters from the FormsApplication to the server when debugging. All seems fine in that chapter. public ChordServerProgram() { RemotingServices.Marshal(this, "PADIBook"); nodeInt = 0; } public void startServer() { try { serverChannel = new TcpChannel(serverInfo.Port); ChannelServices.RegisterChannel(serverChannel, true); } catch (Exception e) { Console.WriteLine(e.ToString()); } } I run two instances of this program. Then startNode is called on one of the instances of the application. The port is fine, the address generated is fine as well. As you can see, I'm using the IP for localhost, since this server is just for testing purposes. public void startNode(String portStr) { IPAddress address = IPAddress.Parse("127.0.0.1"); Int32 port = Int32.Parse(portStr); serverInfo = new ServerInfo(address, port); startServer(); //node = new ChordNode(serverInfo,this); } Then, in the other istance, through the interface again, I call another startNode method, giving it a seed server to get information from. This is where it goes wrong. When it calls the method on the seedServer proxy it just got, a RemotingException is thrown, due to an authentication failure. (The parameter I'll want to get is the node, I'm just using the int to make sure the ChordNode class has nothing to do with this error.) public void startNode(String portStr, String seedStr) { IPAddress address = IPAddress.Parse("127.0.0.1"); Int32 port = Int32.Parse(portStr); serverInfo = new ServerInfo(address, port); IPAddress addressSeed = IPAddress.Parse("127.0.0.1"); Int32 portSeed = Int32.Parse(seedStr); ServerInfo seedInfo = new ServerInfo(addressSeed, portSeed); startServer(); ChordServerProgram seedServer = (ChordServerProgram)Activator.GetObject(typeof(ChordServerProgram), seedInfo.GetFullAddress()); // node = new ChordNode(serverInfo,this); int seedNode = seedServer.nodeInt; // node.chordJoin(seedNode.self); }
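
    One thing that may be worth ruling out (an assumption, not a confirmed diagnosis): the channel is registered with ensureSecurity set to true, and the same bidirectional TcpChannel also carries the outgoing call made through Activator.GetObject, so both sides need to agree on the security settings. A sketch of a variant of the post's startServer() that sets the "secure" flag explicitly:

      // Drop-in variant of startServer() from the post (needs System.Collections,
      // System.Runtime.Remoting.Channels and System.Runtime.Remoting.Channels.Tcp,
      // which the class already uses).
      public void startServerSecure()
      {
          try
          {
              IDictionary props = new Hashtable();
              props["port"] = serverInfo.Port;
              props["secure"] = true;   // make the client-side sinks negotiate security too

              serverChannel = new TcpChannel(props, null, null);
              ChannelServices.RegisterChannel(serverChannel, true);
          }
          catch (Exception e)
          {
              Console.WriteLine(e.ToString());
          }
      }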

    Read the article

  • HttpWebRequest possibly slowing website

    - by Steven Smith
    Using Visual Studio 2012, C# .NET 4.5, SQL Server 2008, Feefo, nopCommerce. Hey guys, I have recently implemented a new review service into a current site we have. When the change went live, all worked fine the first day. Since then, though, the sending of sales to Feefo hasn't been working, and there are no logs of anything going wrong. In OrderProcessingService.cs in nopCommerce's service layer, I make an HttpWebRequest call when an order has been confirmed as completed. Here is the code. var email = HttpUtility.UrlEncode(order.Customer.Email.ToString()); var name = HttpUtility.UrlEncode(order.Customer.GetFullName().ToString()); var description = HttpUtility.UrlEncode(productVariant.ProductVariant.Product.MetaDescription != null ? productVariant.ProductVariant.Product.MetaDescription.ToString() : "product"); var orderRef = HttpUtility.UrlEncode(order.Id.ToString()); var productLink = HttpUtility.UrlEncode(string.Format("myurl/p/{0}/{1}", productVariant.ProductVariant.ProductId, productVariant.ProductVariant.Name.Replace(" ", "-"))); string itemRef = ""; try { itemRef = HttpUtility.UrlEncode(productVariant.ProductVariant.ProductId.ToString()); } catch { itemRef = "0"; } var url = string.Format("feefo Url", login, password,email,name,description,orderRef,productLink,itemRef); var request = (HttpWebRequest)WebRequest.Create(url); request.KeepAlive = false; request.Timeout = 5000; request.Proxy = null; using (var response = (HttpWebResponse)request.GetResponse()) { if (response.StatusDescription == "OK") { var stream = response.GetResponseStream(); if(stream != null) { using (var reader = new StreamReader(stream)) { var content = reader.ReadToEnd(); } } } } So as you can see, it's a simple web request that is processed on an order, and all product variants are sent to Feefo. Now: this hasn't been happening all week since the 15th (the day of the implementation), and the site has been grinding to a halt recently. The stream and reader assigned to the var content are there for debugging. I'm wondering: does the code red-flag anything to you that could relate to the site slowing down? Also note I have run some SQL statements to see if there are any deadlocks or large lock escalations; so far it seems fine. Logs have also been fine, just the usual logging of bots. Any help would be much appreciated! EDIT: also note that this code is in a method that is called and wrapped in a try/catch. UPDATE: well, forget about the "not sending" part; that's because I was just told my code was rolled back last week.
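
    One thing that stands out (a hedged observation, not a confirmed diagnosis): the Feefo call is made synchronously inside order processing, once per product variant, with a 5-second timeout, so a slow or unreachable endpoint can tie up request threads. Below is a sketch of one way to take it off the critical path; the class and method names are illustrative and the URL is passed in as built by the existing code.

      using System;
      using System.Net;
      using System.Threading.Tasks;

      public static class FeefoNotifier
      {
          // Queues the notification on the thread pool so order completion never
          // waits on the external endpoint, and always disposes the response.
          public static void QueueSaleNotification(string url)
          {
              Task.Run(() =>
              {
                  try
                  {
                      var request = (HttpWebRequest)WebRequest.Create(url);
                      request.KeepAlive = false;
                      request.Timeout = 5000;
                      request.Proxy = null;

                      using (var response = (HttpWebResponse)request.GetResponse())
                      {
                          // Nothing to read; disposing releases the connection.
                      }
                  }
                  catch (WebException ex)
                  {
                      // Log and swallow: a failed review notification should not
                      // break or slow down order completion.
                      Console.Error.WriteLine("Feefo notification failed: " + ex.Message);
                  }
              });
          }
      }

    Whether this fully explains the slowdown is uncertain; it simply removes the external call from the order-completion path.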

    Read the article

  • Google Maps API - Marker not showing

    - by popnbrown
    I'm trying to add markers for every single row from a table, that sits on the page. The page is http://www.sravanks.com/first/2013ftcmap.php This is the JS code that's loading the markers: $(document).ready(function() { var mapOptions = { center: new google.maps.LatLng(39.740, -89.503), zoom: 7 }; var map = new google.maps.Map(document.getElementById("map-canvas"), mapOptions); var infowindow = new google.maps.InfoWindow(); /* City Markers */ var cityCircle = new Array(); var i = 0; $.each($('.events tr'), function(index, value) { var name = $(this).find('td:first()').html(); var address = $(this).find('.address').html(); var linkUrl = "http://www.sravanks.com/first/geocode.php?address=" + address; $.ajax({ url: linkUrl }).done(function(data){ var json = $.parseJSON(data.substring(0, data.length-1)); lat = json.results[0].geometry.location.lat; lng = json.results[0].geometry.location.lng; var latlng = new google.maps.LatLng(lat, lng); var marker = new google.maps.Marker({ position: latlng, map: map, icon: 'map-pointer-medium.gif' }); google.maps.event.addListener(marker, 'click', function() { infowindow.setContent(name); infowindow.open(map, marker); cityCircle[i] = new google.maps.Circle({strokeColor: '#00FF00', strokeOpacity: 0.8, strokeWeight: 2, fillColor: '#00FF00', fillOpacity: 0.35, map: map, center: latlng, radius: 144841}); i++; }); }); }); /*Team Markers*/ var markers = {}; var teamName, teamNumber, lat, lng, content; $.each($('.list tr'), function(index, value) { teamName = $(this).find('td.name').html(); teamNumber = $(this).find('td.number').html(); markers[teamNumber] = {}; lat = parseFloat($(this).find('td.lat').html()); lng = parseFloat($(this).find('td.lng').html()); content = "Name: " + teamName + "<br />Number: " + teamNumber; markers[teamNumber]['latlng'] = new google.maps.LatLng(lat, lng); markers[teamNumber]['marker'] = new google.maps.Marker({ position: markers[teamNumber]['latlng'], map: map }); google.maps.event.addListener(markers[teamNumber]['marker'], 'click', function() { infowindow.setContent(content); infowindow.open(map, markers[teamNumber]['marker']); }); }); google.maps.event.addListener(infowindow, 'closeclick', function() { for(var i=0;i<cityCircle.length;i++){ cityCircle[i].setMap(null); } }); }); I've got no errors, but the Team Markers do not show up. The strange thing is that the City Markers do show up. Some more info, the City Markers ajax call is just to a proxy that calls the google geocoding api. Again the link's at http://www.sravanks.com/first/2013ftcmap.php

    Read the article

  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic-sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after-all!) but calling the conversion explicitly bloats your code with mostly meaningless function-calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmers" point-of-view. Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries. Now I see two options on how to make implicit conversion work in user-code: The first would be to provide a proxy-type, that implements conversion-operators and conversion-constructors (and assignments) for all the involved types, and always use that. The second requires a minimal change to the libraries, but allows great flexibility: Add a conversion-constructor for each involved type that can be externally optionally enabled. For example, for a type A add a constructor: template <class T> A( const T& src, typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 ) { *this = convert(src); } and a template template <class X, class Y> struct conversion_enabled : public boost::mpl::false_ {}; that disables the implicit conversion by default. Then to enable conversion between two types, specialize the template: template <> struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {}; and implement a convert function that can be found through ADL. I would personally prefer to use the second variant, unless there are strong arguments against it. Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in-general supply the second method when it's likely that their type will be replicated in software they are most likely beeing used with (I'm thinking of 3d-rendering middle-ware here, where most of those packages implement a 3D vector).

    Read the article

  • RequestFactoryEditorDriver getting edited data after flush

    - by Deanna
    Let me start with I have a solution, but I don't really think it is elegant. So, I am looking for a cleaner way to do this. I have an EntityProxy displayed in a view panel. The view panel is a RequestFactoryEditorDriver only using display mode. The user clicks on a data element and opens a popup editor to edit a data element of the EntityProxy with a few more bits of data than is displayed in the view panel. When the user saves the element I need the view panel to update the display. I ran into a problem because the RequestFactoryEditorDriver of the popup editor flow doesn't let you get to the edited data. The driver uses the passed in context and sends it to the server. The context returned out of flush only allows a Receiver even if you cast it to the type of context you stored in the editor driver in the edit() call. It doesn't appear to send and EntityProxyChanged event either, so I couldn't listen for that and update the display view. The solution I found was to change my domain object persist to return the newly saved entity. Then create the popup editor like this editor.getSaveButtonClickHandler().addClickHandler(createSaveHandler(driver, editor)); // initialize the Driver and edit the given text. driver.initialize(rf, editor); PlayerProfileCtx ctx = rf.playerProfile(); ctx.persist().using(playerProfile).with(driver.getPaths()) .to(new Receiver<PlayerProfileProxy>(){ @Override public void onSuccess(PlayerProfileProxy profile) { editor.hide(); playerProfile = profile; viewDriver.display(playerProfile); } }); driver.edit(playerProfile, ctx); editor.centerAndShow(); Then in the save handler I just fire the context I get from the flush. While this approach works, it doesn't seem right. It would seem I should subscribe to the entitychanged event in the display view and update the entity and the view from there. Also this approach saves the complete entity, not just the changed bits, which will increase bandwidth usage. What I would think should happen, is when you flush the entity it should 'optimistically' update the rf managed version of the entity and fire the entity proxy changed event. Only reverting the entity if something went wrong in the save. The actual save should only send the changed bits. In this way there isn't a need to refetch the whole entity and send that complete data over the wire twice. Is there a better solution?

    Read the article

  • Update information outdated

    - by Achim Krause
    I have a warning triangle that my update information is outdated, the last update was 12 days ago. I use Ubuntu 11.10. A run of sudo apt-get update produces the following output: Ign http://ppa.launchpad.net oneiric InRelease Ign http://de.archive.ubuntu.com oneiric InRelease Ign http://de.archive.ubuntu.com oneiric-updates InRelease Ign http://de.archive.ubuntu.com oneiric-backports InRelease Ign http://security.ubuntu.com oneiric-security InRelease Ign http://extras.ubuntu.com oneiric InRelease Hit http://ppa.launchpad.net oneiric Release.gpg Get:1 http://de.archive.ubuntu.com oneiric Release.gpg [198 B] Hit http://security.ubuntu.com oneiric-security Release.gpg Get:2 http://extras.ubuntu.com oneiric Release.gpg [72 B] Hit http://ppa.launchpad.net oneiric Release Hit http://de.archive.ubuntu.com oneiric-updates Release.gpg Hit http://security.ubuntu.com oneiric-security Release Hit http://extras.ubuntu.com oneiric Release Err http://extras.ubuntu.com oneiric Release Hit http://ppa.launchpad.net oneiric/main Sources Hit http://de.archive.ubuntu.com oneiric-backports Release.gpg Hit http://security.ubuntu.com oneiric-security/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Ign http://linux.dropbox.com oneiric InRelease Hit http://security.ubuntu.com oneiric-security/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe Sources Hit http://security.ubuntu.com oneiric-security/multiverse Sources Hit http://security.ubuntu.com oneiric-security/main i386 Packages Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages Hit http://security.ubuntu.com oneiric-security/universe i386 Packages Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://security.ubuntu.com oneiric-security/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex Hit http://de.archive.ubuntu.com oneiric Release Hit http://de.archive.ubuntu.com oneiric-updates Release Ign http://de.archive.ubuntu.com oneiric Release Hit http://security.ubuntu.com oneiric-security/main Translation-en Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en Hit http://linux.dropbox.com oneiric Release.gpg Hit http://security.ubuntu.com oneiric-security/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-backports Release Hit http://security.ubuntu.com oneiric-security/universe Translation-en Ign http://de.archive.ubuntu.com oneiric/main Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/restricted Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/universe Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/multiverse Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/main i386 Packages/DiffIndex Ign http://de.archive.ubuntu.com oneiric/restricted i386 Packages/DiffIndex Ign http://de.archive.ubuntu.com oneiric/universe i386 Packages/DiffIndex Ign http://de.archive.ubuntu.com oneiric/multiverse i386 Packages/DiffIndex Hit http://linux.dropbox.com oneiric Release Get:3 http://de.archive.ubuntu.com oneiric/main TranslationIndex [3,289 B] Get:4 http://de.archive.ubuntu.com oneiric/multiverse TranslationIndex [2,265 B] Hit http://de.archive.ubuntu.com oneiric/restricted TranslationIndex Get:5 http://de.archive.ubuntu.com oneiric/universe 
TranslationIndex [2,640 B] Hit http://de.archive.ubuntu.com oneiric-updates/restricted i386 Packages Hit http://de.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://de.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://de.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://de.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://de.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Hit http://de.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/main Sources Hit http://linux.dropbox.com oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://linux.dropbox.com oneiric/main TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/restricted Sources Hit http://de.archive.ubuntu.com oneiric-backports/universe Sources Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Sources Hit http://de.archive.ubuntu.com oneiric-backports/main i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/restricted i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/universe i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/multiverse i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/main TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/restricted TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/universe TranslationIndex Hit http://de.archive.ubuntu.com oneiric/main Sources Hit http://de.archive.ubuntu.com oneiric/restricted Sources Hit http://de.archive.ubuntu.com oneiric/universe Sources Hit http://de.archive.ubuntu.com oneiric/multiverse Sources Hit http://de.archive.ubuntu.com oneiric/main i386 Packages Hit http://de.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://de.archive.ubuntu.com oneiric/universe i386 Packages Hit http://de.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://de.archive.ubuntu.com oneiric/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/universe Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/main Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en Err http://de.archive.ubuntu.com oneiric-updates/main Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/restricted Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/universe Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/multiverse Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/main i386 Packages 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Ign http://linux.dropbox.com oneiric/main Translation-en_US Ign http://linux.dropbox.com oneiric/main Translation-en Fetched 273 B 
in 2s (91 B/s) Reading package lists... Done W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key <[email protected]> W: GPG error: http://de.archive.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/oneiric/Release W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/main/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_main_i18n_Index W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/multiverse/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_i18n_Index W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_universe_i18n_Index W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/restricted/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/universe/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/multiverse/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/binary-i386/Packages 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Some index files failed to download. They have been ignored, or old ones used instead. There are some questions with similar problems, but no one seems to get these "Range Not Satisfiable" errors. I do not use a proxy, and the network configuration should not have changed since it worked the last time.

    Read the article

  • sudo apt-get update errors

    - by Adrian Begi
    Here is what I get on my terminal when running sudo apt-get update errors. I dont know if the issue is from my sources.list or my proxy setup(have not made any changes to proxies). Thank you for any help in advanced. Ign http://security.ubuntu.com oneiric-security Release.gpg Ign http://security.ubuntu.com oneiric-security Release Ign http://security.ubuntu.com oneiric-security/main Sources/DiffIndex Ign http://security.ubuntu.com oneiric-security/restricted Sources/DiffIndex Ign http://security.ubuntu.com oneiric-security/universe Sources/DiffIndex Ign http://security.ubuntu.com oneiric-security/multiverse Sources/DiffIndex Ign http://security.ubuntu.com oneiric-security/main amd64 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/restricted amd64 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/universe amd64 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/multiverse amd64 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/main i386 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/restricted i386 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/universe i386 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/multiverse i386 Packages/DiffIndex Ign http://security.ubuntu.com oneiric-security/main TranslationIndex Ign http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Ign http://security.ubuntu.com oneiric-security/restricted TranslationIndex Ign http://security.ubuntu.com oneiric-security/universe TranslationIndex Err http://security.ubuntu.com oneiric-security/main Sources 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/restricted Sources 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/universe Sources 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/multiverse Sources 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/main amd64 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/restricted amd64 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/universe amd64 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/multiverse amd64 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/main i386 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/restricted i386 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/universe i386 Packages 404 Not Found [IP: 91.189.91.15 80] Err http://security.ubuntu.com oneiric-security/multiverse i386 Packages 404 Not Found [IP: 91.189.91.15 80] Ign http://security.ubuntu.com oneiric-security/main Translation-en_US Ign http://security.ubuntu.com oneiric-security/main Translation-en Ign http://security.ubuntu.com oneiric-security/multiverse Translation-en_US Ign http://security.ubuntu.com oneiric-security/multiverse Translation-en Ign http://security.ubuntu.com oneiric-security/restricted Translation-en_US Ign http://security.ubuntu.com oneiric-security/restricted Translation-en Ign http://security.ubuntu.com oneiric-security/universe Translation-en_US Ign http://security.ubuntu.com oneiric-security/universe Translation-en W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/source/Sources 404 Not Found [IP: 
91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/source/Sources 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/source/Sources 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/source/Sources 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80] E: Some index files failed to download. They have been ignored, or old ones used instead. HERE IS MY SOURCES.LIST # # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/main/binary-i386/ # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/restricted/binary-i386/ # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ oneiric main restricted #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/main/binary-i386/ #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/restricted/binary-i386/ #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ oneiric main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ oneiric main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ oneiric universe deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric universe deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. 
Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ oneiric multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric multiverse deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ oneiric-backports main restricted universe multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu oneiric-security main restricted deb-src http://security.ubuntu.com/ubuntu oneiric-security main restricted deb http://security.ubuntu.com/ubuntu oneiric-security universe deb-src http://security.ubuntu.com/ubuntu oneiric-security universe deb http://security.ubuntu.com/ubuntu oneiric-security multiverse deb-src http://security.ubuntu.com/ubuntu oneiric-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu oneiric partner # deb-src http://archive.canonical.com/ubuntu oneiric partner ## Uncomment the following two lines to add software from Ubuntu's ## 'extras' repository. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://extras.ubuntu.com/ubuntu oneiric main # deb-src http://extras.ubuntu.com/ubuntu oneiric main

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure Team had just published their new development portal this week and the SDK 1.3. Within this new release there are a lot of cool features available. The one I'm looking forward to is Remote Desktop Access to your running Windows Azure Virtual Machine. Configuring Remote Desktop Access It is very simple to enable remote desktop access for an Azure service. First of all, let's create a new Windows Azure project from Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window and select "Publish". Then let's select the "Deploy your Windows Azure project to Windows Azure" top radio button, and then select the credential, deployment service/slot, storage and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine beforehand, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice that there's a link named "Configure Remote Desktop connections". Here is where you enable the remote desktop feature for this service. After clicking this link we will set the configuration of the remote desktop access authorization information. There are 4 steps we need to do to configure our access. Certificates: We need to either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API. Username: The remote desktop user name to access the virtual machine. Password: The password for the access. Expiration: The access credentials expire after 1 month by default, but we can amend that here. After that we click the OK button to go back to the publish dialog. The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file onto this service. The user name and password used to access the Azure machine must be encrypted on the local machine, then sent to the Windows Azure platform, and then decrypted on the Azure side by the same file. This is why we need to upload the certificate file onto Azure. We navigated to "Hosted Services, Storage Accounts & CDN" from the left panel, created a new hosted service named "SDK13" and selected the "Certificates" node. Then we clicked the "Add Certificates" button, selected the local certificate file and entered its password to install it into this Azure service. The final step would be to go back to our Visual Studio and, in the publish dialog, just click the OK button. Visual Studio will upload our package and the configuration into our service with the remote desktop settings. Remote Desktop Access to the Azure Virtual Machine Now that everything is done, let's have a look back at the Windows Azure Development Portal. If I select the web role that I had just published, we can see on the toolbar there's a section named "Remote Access". In this section the Enable checkbox has been checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button. Then we can update the user name, password, certificates and the expiration date. Let's select the instance node under the web role. In this case I just created one instance for the demo.
    We can see that when we selected the instance node, the Connect button became enabled. After clicking this button an RDP file will be downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let's download it to our local machine and run it. We input the user name and password we specified when we published our application to Azure and then click OK. A certificate warning dialog might appear. This is because the certificate we use for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe. Finally, the Windows Azure virtual machine appears. A Quick Look into the Azure Virtual Machine Let's just have a very quick look into our virtual machine. There are 3 disks available for us: C, D and E. Disk C: Stores the local resources, diagnostics information, etc. Disk D: System disk which contains the OS, IIS, the .NET Frameworks, etc. Disk E: Stores our application code. The IIS site hosting our website on Azure. The IP configuration of the Azure virtual machine. Summary In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can set the access per service, and all of the instances of this service can be accessed through the remote desktop tool. With this feature we can dig into the virtual machines of our instances to see inner information such as the system events, IIS logs, system information, etc. But we should be careful about modifying the system settings, for 2 reasons from what I know for now: 1. If we have more than one instance for our service we should ensure that all system settings we modified are applied to all instances/virtual machines. Otherwise, as the machines sit behind the Azure load balancer proxy, our application may not work correctly due to the different settings between the instances. 2. When the virtual machine encounters some problem and needs to be migrated to another physical machine, all settings we made would disappear. Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Easily use google maps, openstreet maps etc offline.

    - by samkea
    I did it and I am going to explain step by step. The explanation may appear long but it's simple if you follow it. Note: All the software I have used is the latest, and I have packaged it and provided it in the link below. I use a Nokia N96. 1) RootSign SmartComGPS and install it on your phone (I haven't provided the signer so that you would do some little work; I used Secman's rootsign). 2) Install Universal Maps Downloader, SmartCom OGF2 converter and OziExplorer 3.95.4s on your PC. a) UMD is used to download map tiles from any map source like Google Maps, OpenStreetMap etc... and also to combine the tiles into an image file like png, jpg, bmp etc... b) SmartCom OGF2 converter is used to convert the image file into a format usable on your mobile phone. c) OziExplorer will help you calibrate the usable map file so that it can be used with GPS on your mobile phone without the use of the internet. 3) Go to Google Maps or wherever you pick your maps and pan to the area of interest. Zoom the map to at least zoom level 15 or 16, where you can see your area and the streets clearly. 4) Copy this script into a notepad file and save it on your desktop: javascript:void(prompt('',gApplication.getMap().getCenter())); 5) Open the Universal Maps Downloader. You will notice that you are required to enter the left longitude, right longitude, top latitude and bottom latitude. 6) On your map in Google Maps, double-click on your preferred top-most middle point. You will notice that the map will center on that area. 7) Copy the script and paste it into the address bar, then press enter. You will notice that a dialog with your (top) latitude and longitude pops up. 8) Copy the top latitude ONLY and paste it into the corresponding textbox in UMD. 9) Repeat steps 6-7 for the bottom latitude. 10) Repeat steps 6-7 for the left longitude and right longitude too, but you have to copy the longitudes here. (***BTW record these points in a text file as they may be needed later in calibration.) 11) Set the zoom level to the same zoom level that you preferred in Google Maps. 12) Don't forget to choose a path to save your files, and under options set the proxy connection settings in UMD if you are using one. 13) Click on start and bingo! There you have your image tiles, and a file with the extension .umd will be saved in the same folder. 14) In UMD, go to Tools, click on MapViewer and choose the .umd file. You will now see your map in one piece... and you will smile! 15) Still under Tools, click on Map Combiner. A dialog will pop up for you to choose the .umd file and to enter the IMAGE file name. You can use another extension for the image file like png, jpg etc... I usually use png. 16) Combine... bingo! There you go, you have an IMAGE file for your map. *I SUGGEST THAT YOU CREATE A .BMP FILE and A .PNG FILE* 17) Close UMD and open the SmartCom OGF2 converter. 18) Choose your .png image and create an ogf2 file. 19) Connect your phone to your PC in Mass Memory mode and transfer the file to the smartComGPS\Maps folder. 20) Now disconnect your phone and load SmartComGPS. It will load the map and prompt you to add a calibration point. Go ahead and add one calibration point with dummy coordinates. You will notice that it will add another file with the extension .map in the smartComGPS\Maps folder. 21) Connect your phone, copy that file and paste it into your working folder on your PC. Delete the .map file from the phone too, because you are going to edit it on your PC and put it back. 
22) Now open OziExplorer and go to File --> Load and Calibrate Map Image. 23) Choose the .bmp image and bingo! It will load your map at the same zoom level. 24) Now you are going to calibrate. Use the MapView window and take the small box locator to all 4 corners of the map. You will notice that the map in the background moves to that area too. 25) On the right side, select the Point1 tab. Now you are in calibration mode. Move the red box in MapView to the upper left corner to calibrate point 1. 26) Out of MapView, go to the upper left corner of the background map and choose point (0,0) as your 1st calibration point. You will notice that these X,Y coordinates are reflected in the Point1 image coordinates. 27) Now go back to the text file where you saved your coordinates and enter the top latitude and the left longitude in the corresponding places. 28) Repeat steps 25-27 for point2, point3 and point4, and click on save. That's it, you have calibrated your image and you are about to finish. 29) Go to save, and a dialog which prompts you to save a .map file will pop up. Save the map file in your working folder. 30) Right-click that .map file and edit the filename in the .map file to remove the PC's directory structure. E.g. change C:\OziExplorer\data\Kampala.bmp to Kampala.ogf2. 31) Save the .map file in the smartComGPS\Maps folder on your phone. 32) Now open SmartComGPS on your phone and bingo! There is your map with GPS capability and at the same zoom level. 33) In SmartComGPS options, choose connect and simulate. By now you should be smiling. Whoa! Hope I was of help. In case you get a problem, please inform me. Below is the link to the software. Regards. http://rapidshare.com/files/230296037/Utilities_Used.rar.html Ok, the Rapidshare files I posted are gone, so you will have to download the tools as described in the solution. If you need more help, go here: http://www.dotsis.com/mobile_phone/sitemap/t-160491.html Some months later, someone else gave almost the same kind of solution here: http://www.dotsis.com/mobile_phone/sitemap/t-180123.html Note: the solutions were meant to help view maps on Symbian phones, but I think they can now even work for Windows Phones, iPhones and others, so read, extract what you want and use it. Hope it helps. Sam Kea

    Read the article

  • Automating Form Login

    - by Greg_Gutkin
    Introduction A common task in configuring a web application for proxying in Pagelet Producer is setting up form autologin. PP provides a wizard-like tool for detecting the login form fields, but this is usually only the first step in configuring this feature. If the generated configuration doesn't seem to work, some additional manual modifications will be needed to complete the setup. This article will try to guide you through this process while steering you away from common pitfalls. For the purposes of this article, let's assume the following characteristics about your environment: Web Application Base URL: http://host/app (configured as Resource Source URL in PP) Pagelet Producer Base URL: http://pp/pagelets Form Field Auto-Detection Form Autologin is configured in the PP Admin UI under resource_name/Autologin/Form Login. First, you'll enter the URL to the login form under "Login Form Identification". This will enable the admin wizard to connect to and display the login page. Caution: RedirectsMake sure the entered URL matches what you see in the browser's address bar, when the application login page is displayed. For example, even though you may be able to reach the login page by simply typing http://host/app, the URL you end up on may change to http://host/app/login via browser redirect(s).The second URL is the one you will want to use. Caution: External Login ServersThe login page may actually come from a different server than the application you are trying to proxy. For example, you may notice that the login page URL changes to http://hostB/appB. This is common when external SSO products are involved. There are two ways of dealing with this situation. One is to configure Pagelet Producer to participate in SSO. This approach is out of scope of this article and is discussed in a separate whitepaper (TODO add link). The second approach is to use the autologin feature to provide stored credentials to the SSO login form. Since the login form URL is not an extension of the application base URL (PP resource URL), you will need to add a new PP resource for the SSO server and configure the login form on that resource instead of the original application resource. One side benefit of this additional resource is that it can reused for other applications relying on the same SSO server for login. After entering the login page URL (make sure dropdown says "URL"), click "Automatically Detect Form Fields". This will bring up the web app's login page in a new browser window. Fill it out and submit it as you would normally. If everything goes right, Pagelet Producer will intercept the submitted values and fill out all the needed configuration data in the Admin UI. If the login form window doesn't close or configuration data doesn't get filled in, you may have not entered the login page URL correctly. Review the two cautionary notes above and make any necessary changes. If the form fields got filled automatically, it's time to save the configuration and test it out. If you can access a protected area of the backend application via a proxied PP URL without filling out its login form, then you are pretty much done with login form configuration. The only other step you will need to complete before declaring this aspect of configuration production ready is configuring form field source. You may skip to that section below. Manual Login Form Identification Let's take a closer look at Login Form Identification. This determines how Pagelet Producer recognizes login forms as such. 
URL The most efficient way of detecting login forms is by looking at the page URL. This method can only be used under the following conditions: The login page URL must be different from the post-login application URLs. The login page URL must stay constant regardless of the path taken to reach the page. For example, reaching the login page by going to the application base URL or to a specific protected URL must result in a redirect to the same login page URL (query string excluded). If only the query string parameters change, just leave the query string out of the configured login page URL. If either of these conditions is not fulfilled, you must switch to the RegEx approach below. RegEx If the login page URL is not uniform enough across all scenarios or is indistinguishable from other page locations, PP can be configured to recognize it by looking at the page markup itself. This is accomplished by changing the dropdown to "RegEx". If regular expressions scare you, take comfort from the fact that in most cases you won't need to enter any special regex characters. Let's look at an example. Say you have a login form that looks like <form id='loginForm' action='login?from=pageA' > <input id='user'> <input id='pass'> </form> Since this form has an id attribute, you can be reasonably sure that this login form can be uniquely identified across the web application by this snippet: "id='loginForm'" (unless, of course, your backend web application contains login forms for other apps). Since no wildcards are needed to find this snippet, you can just enter it as-is into the RegEx field - no special regular expression characters needed! If the web developer who created the form wasn't kind enough to provide a unique id, you will need to look for other snippets of the page to uniquely identify it. It could be the action URL, an input field id, or some other markup fragment. You should abstain from using UI text as an identifier, as it may change in translated versions of the page and prevent the login page logic from working for international users. You may need to turn to regular expression wildcard syntax if no simple matches work. For more information on regular expressions, refer to the Resources section. Form Submit Location Now we'll look at the form submit location. If the captured URL contains query string parameters that will likely change from one form submission to the next, you will need to change its type to RegEx. This type tells Pagelet Producer to parse the login page for the action URL and submit to the value found. The regular expression needs to point at the actual action URL with its first grouping expression. Taking the example form definition above, the form submit location regex would be: action='(.*?)' The parentheses are used to identify the actual action URL, while the rest of the expression provides the context for finding it. The expression .*? is a so-called reluctant wildcard that matches any character up to, but excluding, the single quote that follows. See the Resources section below for further information on regular expressions. Manual Form Field Detection If the Admin UI form field detection wizard fails to populate the login form configuration page, you will have to enter the fields by hand. Use a built-in browser developer tool or addon (e.g. Firebug) to inspect the form element and its child input elements. For each input element (including hidden elements), create an entry under Form Fields. Change its Source according to the next section. 
Form Field Source Change the source of any fields not exposed to the users of the login form (i.e. hidden fields) to "Generated". This means Pagelet Producer will just use the values returned by the web app rather than supplying values it stored. For fields that contain sensitive data or vary from user to user (e.g. username & password), change the source to User (Credential) Vault. Logging Support To help you troubleshoot your autologin configuration, PP provides some useful logging support. To turn on detailed logging for the autologin feature, navigate to Settings in the Admin UI. Under Logging, change the log level for AutoLogin to Finest. Known Limitations The autologin feature may not work as expected if the login form fields (not just the values, but the DOM elements themselves) are generated dynamically by client-side JavaScript. Resources RegEx RegEx Reference from Java RegEx Test Tool
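As a quick sanity check of how the Form Submit Location expression above behaves, here is a small standalone snippet (illustrative only - plain .NET regex code, not Pagelet Producer itself):

using System;
using System.Text.RegularExpressions;

class FormActionRegexDemo
{
    static void Main()
    {
        // The sample login form markup from the article, flattened to one string.
        string page = "<form id='loginForm' action='login?from=pageA' >" +
                      "<input id='user'><input id='pass'></form>";

        // action='(.*?)' -- the reluctant .*? stops at the first closing quote,
        // so group 1 captures just the action URL.
        Match m = Regex.Match(page, @"action='(.*?)'");
        if (m.Success)
        {
            Console.WriteLine(m.Groups[1].Value); // prints: login?from=pageA
        }
    }
}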

    Read the article

  • top Tweets SOA Partner Community – September 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity OracleBlogs ?Oracle SOA Suite for healthcare integration Dashboard http://ow.ly/1mcJvp SOA Community ?Lost in Translation &ndash; Common Mistakes Interpreting Patterns &ndash; Mark Simpson, Griffiths-Waite @ SOA, Cloud & Service… ServiceTechSymposium Matthias Zieger, Accenture just added to the agenda to co-present: "Service Modeling & BPM Business Value Patterns" http://ow.ly/ddu7A ServiceTechSymposium ?Newly updated session title and abstract: "Big Data and its impact on SOA", by Demed L'Her, Oracle. http://ow.ly/diOq2 Deepak Arora ?To PaaS or SaaS - the latest discussions with customers using SOA Suite - what are your thoughts #soa #soacommunity SOA Community top Tweets SOA Partner Community July 2012 - are you one of them? If yes please rt! https://soacommunity.wordpress.com/2012/08/28/top-tweets-soa-partner-community-august-2012/ … #soacommunity Sandor Nieuwenhuijs Checkout the BeNeLux Architectural Networking Event during Oracle Open World - meet your peers and the experts http://www.ddg-servicecenter.com/networkmanager/oow/architect/default.aspx … SOA Community ?top Tweets SOA Partner Community &ndash; August 2012 http://wp.me/p10C8u-uf SOA Community ?Follow SOA Community on facebook http://www.facebook.com/soacommunity #soacommunity SOA Community ?New Service to promote Your SOA & BPM events at http://oracle.com/events for SOA & BPM Specialized Partners Only! #soacommunity #opn #oracle Jan van Zoggel ?Hotel check, flight check, overview of sessions to visit check http://jvzoggel.wordpress.com/2012/08/27/soa-cloud-servicetech-symposium/ … I'm ready for SOA, Cloud & Service Technology Symposium SOA Community SOA & BPM Specialized Partners Only! New Service to Promote Your SOA & BPM Events at http://oracle.com/events http://wp.me/p10C8u-sH SOA Community Call for content for the next community newsletter. Do you want to publish your success & best practice? Send it @soacommunity #soacommunity SOA Community SOA Adoption in the Brazilian Ministry of Health - Case Study by Ricardo Puttini, University of Brasilia @ SOA, Cloud & Service… Jan van Zoggel ?Just registered for the 5th International SOA, Cloud & Service Technology Symposium in London. Looking forward to it. http://www.servicetechsymposium.com/ OTNArchBeat ?Want to prepare for Oracle SOA Specialization? @t_winterberg offers a suggestion. http://pub.vitrue.com/5Hqu OTNArchBeat ?Oracle BPM enable BAM | @deltalounge http://pub.vitrue.com/BCwj SOA Community Presentations & Training material OFM Summer Camps & Impressions & Feedback http://wp.me/p10C8u-sF Emiel Paasschens Nice! Pdf document on how to use a #Oracle #SOA Suite Domain Value Map (DVM) in the OSB: http://bit.ly/RzyS9w #yam OracleBlogs ?Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL http://ow.ly/1m4lz7 demed ?Free VIP pass for @techsymp if you are in London Sep. 24-25. Be the first one to retweet this and I'll DM you details! http://www.servicetechsymposium.com/speaker_bios.php?id=demed_lher … Jan van Zoggel blogpost: Oracle Service Bus duplicate message check using Oracle Coherence caching http://jvzoggel.wordpress.com/2012/08/20/osb-duplicate-message-with-coherence/ … OTNArchBeat ?Oracle Service Bus duplicate message check using Coherence | @jvzoggel http://pub.vitrue.com/ckY8 Oracle UPK & Tutor Synaptis and Oracle Present: Leveraging UPK Throughout the Project Lifecycle: Leveraging UPK throughout the Proj... 
http://bit.ly/OS2Rbg Rolando Carrasco ?New entry @ oracleradio http://bit.ly/SEvwwS @soacommunity @oracleace How to identify duplicated messages on Oracle SOA SUITE? SOA Community ?Business Driven Development (BDD) Demo Now Available! http://wp.me/p10C8u-sf OTNArchBeat ?Installing Oracle SOA Suite10g on Oracle Enterprise Linux | @lonnekedikmans http://pub.vitrue.com/BEyD OTNArchBeat ?Best practices for Oracle real-time data integration | Frank Ohlhorst http://pub.vitrue.com/1fH1 ServiceTechSymposium ?New OTN podcast featuring speakers Thomas Erl, Tim Hall and Demed L’Her just published. Tune into 1st 3 parts here: http://ow.ly/d1RRn OTNArchBeat ?SOA, Cloud, and Service Technologies - Part 4 of 4 - Best selling SOA author Thomas Erl talks about the latest title... http://ow.ly/1m0txY SOA Community Win a free conference pass for the SOA, Cloud + Service Technology Symposium &ndash; become a soacommunity facebook fan!… Lonneke Dikmans VENNSTER BLOG: Installing Oracle SOA Suite10g on Oracle Enterpris... http://blog.vennster.nl/2012/08/installing-oracle-soa-suite-10g-on.html?spref=tw … PeterPaul vande Beek published a blog on exporting Oracle #BPM metrics to a #DWH http://www.deltalounge.net/wpress/2012/08/export-oracle-bpm-metrics-to-a-data-warehouse/ … #soacommunity SOA Community ?Do you follow us on facebook http://www.facebook.com/soacommunity #soacommunity C2B2 Consulting ?Cloud-based Enterprise Architecture by Steve Millidge, C2B2 Consulting @ SOA, Cloud &amp; Service … http://wp.me/p10C8u-sv via @soacommunity Gertjan van het Hof Storing SCA Metadata in the Oracle Metadata Services Repository http://www.oracle.com/technetwork/articles/soa/fonnegra-storing-sca-metadata-1715004.html?msgid=3-6903117805 … arjankramer ?Encrypted OSB Service account passwords http://dlvr.it/20hbNV Richard van Tilborg BPM the Battle http://lnkd.in/yFAJaW OTNArchBeat Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL | @RahejaRajesh http://pub.vitrue.com/YDCD SOA Proactive ?Webcast: Introduction to SOA Human Workflow, 8/23, 10 AM EDT. Register @ http://bit.ly/Nx77sY Lucas Jellema ?Programmatically admnistration of OSB using JXM & MBeans. Interesting example is given in https://blogs.oracle.com/ateamsoab2b/entry/automatic_disabling_proxy_service_when … orclateamsoa ?A-Team Blog #ateam: Automatically Disable Proxy Service to avoid overloading OSB http://ow.ly/1lXGKV Atul_Kumar ?Oracle Enterprise Gateway – OEG 11gR1 (11.1.1.*) for beginners http://goo.gl/fb/EJboE Estafet Limited Advanced SOA Boot camp @soacommunity in Munich was excellent.@wlscommunity Learnt a lot and liked the format. SOA Community Oracle Fusion Applications Design Patterns Now Available For Developers by Ultan O'Broin http://wp.me/p10C8u-sd SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community twitter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • nginx problem accessing virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs on directadmin and lamp stack. I have nginx running on port 81. I can access all my sites (including virtual ips) on the port 81. However when I forward the traffic from port 80 to 81, the virtual ips have a message saying "Apache is running normally". Server IPs are fine, and I can still access virtual IP's on 81. [root@~]# netstat -an | grep LISTEN | egrep ":80|:81" tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <serverip>:81 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct> apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process here is my nginx conf: cat /usr/local/nginx/conf/nginx.conf user apache apache; worker_processes 2; # Set it according to what your CPU have. 
4 Cores = 4 worker_rlimit_nofile 8192; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; server_tokens off; access_log /var/log/nginx_access.log main; error_log /var/log/nginx_error.log debug; server_names_hash_bucket_size 64; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 30; gzip on; gzip_comp_level 9; gzip_proxied any; proxy_buffering on; proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m; proxy_buffer_size 16k; proxy_buffers 100 8k; proxy_connect_timeout 60; proxy_send_timeout 60; proxy_read_timeout 60; server { listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <server host name> _; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<server ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } location /nginx_status { stub_status on; access_log off; allow 127.0.0.1; deny all; } } include /usr/local/nginx/vhosts/*.conf; } here is my vhost conf: # cat /usr/local/nginx/vhosts/1.conf server { listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <virt domain name>.com ; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<virt ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } }
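One thing worth checking - offered as an assumption about the cause, not a confirmed diagnosis: if whatever forwards port 80 to 81 rewrites the destination address to the main server IP, the per-IP listen directives in the vhost files will never match, so every request falls into the default server block and ends up at Apache's default page. A name-based vhost, which matches on the Host header instead of the destination IP, sidesteps that; a rough sketch (the domain names and upstream address are placeholders):

server {
    listen 81;                                   # any local address on port 81
    server_name example.com www.example.com;     # placeholder vhost names

    location / {
        proxy_set_header Host $host;             # pass the requested host through
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:80;          # placeholder: wherever Apache actually answers
    }
}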

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much.   I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.   So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit.   So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.   A good extension method should:     Apply to any possible instance of the type it extends.     Simplify logic and improve readability/maintainability.     Apply to the most specific type or interface applicable.     Be isolated in a namespace so that it does not pollute IntelliSense.     So let's look at a few examples in relation to these rules.   The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.    Take this nifty little int extension. I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:       public static class IntExtensions     {         public static int Seconds(this int num)         {             return num * 1000;         }           public static int Minutes(this int num)         {             return num * 60000;         }     }     This is so you could do things like:       ...     Thread.Sleep(5.Seconds());     ...     proxy.Timeout = 1.Minutes();     ...     Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve much the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:       total += numberOfTodaysOrders.Seconds();     which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.    Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense, and b) calling Seconds() on something and passing the result to something that does not take milliseconds will be off by a factor of 1000 or worse.   Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.   
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().
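Putting the two halves together, calling code might look something like the sketch below (the Employee type here is just a stand-in; the point is that the extension form only lights up when the Extensions namespace is imported):

using System;
using Shared.Core;              // XmlUtility lives here
using Shared.Core.Extensions;   // opt-in: brings the ToXml() extension into scope

public class Employee           // stand-in type for the example
{
    public string Name { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var employee = new Employee { Name = "Jane" };

        // Option 1: plain utility call - works with just Shared.Core,
        // no IntelliSense pollution on 'object'.
        string xml1 = XmlUtility.ToXml(employee);

        // Option 2: extension syntax - only available because
        // Shared.Core.Extensions was imported above.
        string xml2 = employee.ToXml();

        Console.WriteLine(xml1 == xml2); // both paths go through XmlUtility.ToXml
    }
}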

    Read the article

  • Packaging Swing apps with integrated JavaFX content

    - by igor
    JavaFX provides a lot of interesting capabilities for developing rich client applications in Java, but what if you are working on an existing Swing application and you want to take advantage of these new features?  Maybe you want to use one or two controls like the LineChart or a MediaView.  Maybe you want to embed a large Scene Graph as an initial step in porting your application to FX.  A hybrid Swing/FX application might just be the answer. Developing a hybrid Swing + JavaFX application is not terribly difficult, but until recently the deployment of hybrid applications has not simple as a "pure" JavaFX application.  The existing tools focused on packaging FX Applications, or Swing applications - they did not account for hybrid applications. But with JavaFX 2.2 the tools include support for this hybrid application use case.  Solution  In JavaFX 2.2 we extended the packaging ant tasks to greatly simplify deploying hybrid applications.  You now use the same deployment approach as you would for pure JavaFX applications.  Just bundle your main application jar with the fx:jar ant task and then generate html/jnlp files using fx:deploy.  The only difference is setting toolkit attribute for the fx:application tag as shown below: <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>  The value of ${main.class} in the example above is your application class which has a main method.  It does not need to extend JavaFX Application class. The resulting package provides support for the same set of execution modes as a package for a JavaFX application, although the packages which are created are not identical to the packages created for a pure FX application.  You will see two JNLP files generated in the case of a hybrid application - one for use from Swing applet and another for the webstart launch.  Note that these improvements do not alter the set of features available to Swing applications. The packaging tools just make it easier to use the advanced features of JavaFX in your Swing application. The same limits still apply, for example a Swing application can not use JavaFX Preloaders and code changes are necessary to support HTML splash screens. Why should I use the JavaFX ant tasks for packaging my Swing application?  While using FX packaging tool for a Swing application may seem like a mismatch at face value, there are some really good reasons to use this approach.  The primary justification for our packaging tools is to simplify the creation of your application artifacts, and to reduce manual errors.  Plus, no one should have to write JNLP by hand. Some specific benefits include: Your application jar will include a launcher program.  This improves your standalone launch by: checking for the JavaFX runtime guiding the user through any necessary installations setting the system proxy for Java The ant tasks will generate JNLP and HTML files for your swing app: avoids learning unnecessary details about JNLP, and eliminates the error-prone hand editing of JNLP files simplifies using advanced features like embedding JNLP and signing jars as BLOBs to improve launch performance.you can also embed the signing certificate details to improve the user's experience  allows the use of web page templates to inject the generated code directly into your actual web page instead of being forced to copy/paste the generated code snippets. What about native packing? Absolutely!  The very same ant task can generate a native bundle for a Swing application with JavaFX content.  
Try running one of these sample native bundles for the "SwingInterop" FX example: exe and dmg.   I also used another feature on these examples: a click-through license agreement for .exe installers and OS X DMG drag installers. Small Caveat This packaging procedure is optimized around using the JavaFX packaging tools for your entire Swing application.  If you are trying to embed JavaFX content into existing project (with an existing build/packing process) then you may need to experiment in order to find the best way to integrate the JavaFX packaging steps into your existing build procedure. As long as you can use ant in your build process this should be a workable approach. It some cases solution could be less than ideal. For example, you need to use fx:jar to package your main jar file in order to produce a double-clickable jar or a native bundle.  The jar will be created from scratch, but you may already be creating the main jar file with a custom manifest.  This may lead to some redundant steps in your build process.  Hopefully the benefits will outweigh the problems. This is an area of ongoing development for the team, and we will continue to refine and improve both the tools and the process. Please share your experiences and suggestions with us.  You can comment here on the blog or file issues to JIRA. Sample code Here is the full ant code used to package SwingInterop.  You can grab latest JavaFX samples and try it yourself:  <target name="-post-jar"> <taskdef resource="com/sun/javafx/tools/ant/antlib.xml" uri="javafx:com.sun.javafx.tools.ant" classpath="${javafx.tools.ant.jar}"/> <!-- Mark application as Swing-based --> <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/> <!-- Create doubleclickable jar file with embedded launcher --> <fx:jar destfile="${dist.jar}"> <fileset dir="${build.classes.dir}"/> <fx:application refid="swingFXApp" name="SwingInterop"/> <manifest> <attribute name="Implementation-Vendor" value="${application.vendor}"/> <attribute name="Implementation-Title" value="${application.title}"/> <attribute name="Implementation-Version" value="1.0"/> </manifest> </fx:jar> <!-- sign application jar. Use new self signed certificate --> <delete file="${build.dir}/test.keystore"/> <genkey alias="TestAlias" storepass="xyz123" keystore="${build.dir}/test.keystore" dname="CN=Samples, OU=JavaFX Dev, O=Oracle, C=US"/> <fx:signjar keystore="${build.dir}/test.keystore" alias="TestAlias" storepass="xyz123"> <fileset file="${dist.jar}"/> </fx:signjar> <!-- generate JNLPs, HTML and native bundles --> <fx:deploy width="960" height="720" includeDT="true" nativeBundles="all" outdir="${basedir}/${dist.dir}" embedJNLP="true" outfile="${application.title}"> <fx:application refId="swingFXApp"/> <fx:resources> <fx:fileset dir="${basedir}/${dist.dir}" includes="SwingInterop.jar"/> </fx:resources> <fx:permissions/> <info title="Sample app: ${application.title}" vendor="${application.vendor}"/> </fx:deploy> </target>
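For context, the kind of ${main.class} this packaging targets is an ordinary Swing entry point that hosts JavaFX content through JFXPanel - no Application subclass, which is exactly why the fx:application tag needs toolkit="swing". A minimal sketch (not the actual SwingInterop sample; written with Java 8 lambdas for brevity):

import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import javafx.application.Platform;
import javafx.embed.swing.JFXPanel;
import javafx.scene.Scene;
import javafx.scene.control.Label;

// Plain Swing main class; creating the JFXPanel bootstraps the JavaFX runtime.
public class SwingFxMain {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Hybrid Swing + JavaFX");
            JFXPanel fxPanel = new JFXPanel();          // starts the FX toolkit
            frame.add(fxPanel);
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            // JavaFX scene graph must be built on the JavaFX Application Thread.
            Platform.runLater(() ->
                fxPanel.setScene(new Scene(new Label("JavaFX content inside Swing"))));
        });
    }
}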

    Read the article

  • Prevent your Silverlight XAP file from caching in your browser.

    - by mbcrump
    If you work with Silverlight daily then you have run into this problem. Your XAP file has been cached in your browser and you have to empty your browser cache to resolve it. If your using Google Chrome then you typically do the following: Go to Options –> Clear Browsing History –> Empty the Cache and finally click Clear Browsing data. As you can see, this is a lot of unnecessary steps. It is even worse when you have a customer that says, “I can’t see the new features you just implemented!” and you realize it’s a cached xap problem.  I have been struggling with a way to prevent my XAP file from caching inside of a browser for a while now and decided to implement the following solution. If the Visual Studio Debugger is attached then add a unique query string to the source param to force the XAP file to be refreshed. If the Visual Studio Debugger is not attached then add the source param as Visual Studio generates it. This is also in case I forget to remove the above code in my production environment. I want the ASP.NET code to be inline with my .ASPX page. (I do not want a separate code behind .cs page or .vb page attached to the .aspx page.) Below is an example of the hosting code generated when you create a new Silverlight project. As a quick refresher, the hard coded param name = “source” specifies the location of your XAP file.  <form id="form1" runat="server" style="height:100%"> <div id="silverlightControlHost"> <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/SilverlightApplication2.xap"/> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50826.0" /> <param name="autoUpgrade" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object><iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe></div> </form> We are going to use a little bit of inline ASP.NET to generate the param name = source dynamically to prevent the XAP file from caching. Lets look at the completed solution: <form id="form1" runat="server" style="height:100%"> <div id="silverlightControlHost"> <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <% string strSourceFile = @"ClientBin/SilverlightApplication2.xap"; string param; if (System.Diagnostics.Debugger.IsAttached) //Debugger Attached - Refresh the XAP file. param = "<param name=\"source\" value=\"" + strSourceFile + "?" 
+ DateTime.Now.Ticks + "\" />"; else { //Production Mode param = "<param name=\"source\" value=\"" + strSourceFile + "\" />"; } Response.Write(param); %> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50826.0" /> <param name="autoUpgrade" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object><iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe></div> </form> We add the location to our XAP file to strSourceFile and if the debugger is attached then it will append DateTime.Now.Ticks to the XAP file source and force the browser to download the .XAP. If you view the page source of your Silverlight Application then you can verify it worked properly by looking at the param name = “source” tag as shown below. <param name="source" value="ClientBin/SilverlightApplication2.xap?634299001187160148" /> If the debugger is not attached then it will use the standard source tag as shown below. <param name="source" value="ClientBin/SilverlightApplication2.xap"/> At this point you may be asking, How do I prevent my XAP file from being cached on my production app? Well, you have two easy options: 1) I really don’t recommend this approach but you can force the XAP to be refreshed everytime with the following code snippet.  <param name="source" value="ClientBin/SilverlightApplication2.xap?<%=Guid.NewGuid().ToString() %>"/> NOTE: You could also substitute the “Guid.NewGuid().ToString() for anything that create a random field. (I used DateTime.Now.Ticks earlier). 2) Another solution that I like even better involves checking the XAP Creation Date and appending it to the param name = source. This method was described by Lars Holm Jenson. <% string strSourceFile = @"ClientBin/SilverlightApplication2.xap"; string param; if (System.Diagnostics.Debugger.IsAttached) param = "<param name=\"source\" value=\"" + strSourceFile + "\" />"; else { string xappath = HttpContext.Current.Server.MapPath(@"") + @"\" + strSourceFile; DateTime xapCreationDate = System.IO.File.GetLastWriteTime(xappath); param = "<param name=\"source\" value=\"" + strSourceFile + "?ignore=" + xapCreationDate.ToString() + "\" />"; } Response.Write(param); %> As you can see, this problem has been solved. It will work with all web browsers and stubborn proxy servers that are caching your .XAP. If you enjoyed this article then check out my blog for others like this. You may also want to subscribe to my blog or follow me on Twitter.   Subscribe to my feed

    Read the article

  • Real World Nuget

    - by JoshReuben
    Why Nuget A higher level of granularity for managing references When you have solutions of many projects that depend on solutions of many projects etc à escape from Solution Hell. Links · Using A GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages · Creating a Nuspec File - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx · consuming a Nuget Package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3 · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference · updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages · versioning - http://docs.nuget.org/docs/reference/versioning POC Folder Structure POC Setup Steps · Install package explorer · Source o Create a source solution – configure output directory for projects (Project > Properties > Build > Output Path) · Package o Add assemblies to package from output directory (D&D)- add net folder o File > Export – save .nuspec files and lib contents <?xml version="1.0" encoding="utf-16"?> <package > <metadata> <id>MyPackage</id> <version>1.0.0.3</version> <title /> <authors>josh-r</authors> <owners /> <requireLicenseAcceptance>false</requireLicenseAcceptance> <description>My package description.</description> <summary /> </metadata> </package> o File > Save – saves .nupkg file · Create Target Solution o In Tools > Options: Configure package source & Add package Select projects: Output from package manager (powershell console) ------- Installing...MyPackage 1.0.0 ------- Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'. Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'. Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'. Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'. Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'. Successfully installed 'MyPackage 1.0.0'. Added reference 'NugetSource.AssemblyA' to project 'AssemblyX' Added reference 'NugetSource.AssemblyB' to project 'AssemblyX' Added file 'packages.config'. Added file 'packages.config' to project 'AssemblyX' Added file 'repositories.config'. Successfully added 'MyPackage 1.0.0' to AssemblyX. 
============================== o Packages folder created at solution level o Packages.config file generated in each project: <?xml version="1.0" encoding="utf-8"?> <packages>   <package id="MyPackage" version="1.0.0" targetFramework="net40" /> </packages> A local Packages folder is created for package versions installed: Each folder contains the downloaded .nupkg file and its unpacked contents – eg of dlls that the project references Note: this folder is not checked in UpdatePackages o Configure Package Manager to automatically check for updates o Browse packages - It automatically picked up the updates Update Procedure · Modify source · Change source version in assembly info · Build source · Open last package in package explorer · Increment package version number and re-add assemblies · Save package with new version number and export its definition · In target solution – Tools > Manage Nuget Packages – click on All to trigger refresh , then click on recent packages to see updates · If problematic, delete packages folder Versioning uninstall-package mypackage install-package mypackage –version 1.0.0.3 uninstall-package mypackage install-package mypackage –version 1.0.0.4 Dependencies · <?xml version="1.0" encoding="utf-16"?> <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd"> <metadata> <id>MyDependentPackage</id> <version>1.0.0</version> <title /> <authors>josh-r</authors> <owners /> <requireLicenseAcceptance>false</requireLicenseAcceptance> <description>My package description.</description> <dependencies> <group targetFramework=".NETFramework4.0"> <dependency id="MyPackage" version="1.0.0.4" /> </group> </dependencies> </metadata> </package> Using NuGet without committing packages to source control http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages Right click on the Solution node in Solution Explorer and select Enable NuGet Package Restore. — Recall that packages folder is not part of solution If you get downloading package ‘Nuget.build’ failed, config proxy to support certificate for https://nuget.org/api/v2/ & allow unrestricted access to packages.nuget.org To test connectivity: get-package –listavailable To test Nuget Package Restore – delete packages folder and open vs as admin. In nugget msbuild: <Import Project="$(SolutionDir)\.nuget\nuget.targets" /> TFSBuild Integration Modify Nuget.Targets file <RestorePackages Condition="  '$(RestorePackages)' == '' "> True </RestorePackages> … <PackageSource Include="\\IL-CV-004-W7D\Packages" /> Add System Environment variable EnableNuGetPackageRestore=true & restart the “visual studio team foundation build service host” service. 
Important: Ensure Network Service has access to Packages folder Nugetter TFS Build integration Add Nugetter build process templates to TFS source control For Build Controller - Specify location of custom assemblies Generate .nuspec file from Package Explorer: File > Export Edit the file elements – remove path info from src and target attributes <?xml version="1.0" encoding="utf-16"?> <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">     <metadata>         <id>Common</id>         <version>1.0.0</version>         <title />         <authors>josh-r</authors>         <owners />         <requireLicenseAcceptance>false</requireLicenseAcceptance>         <description>My package description.</description>         <dependencies>             <group targetFramework=".NETFramework3.5" />         </dependencies>     </metadata>     <files>         <file src="CommonTypes.dll" target="CommonTypes.dll" />         <file src="CommonTypes.pdb" target="CommonTypes.pdb" /> … Add .nuspec file to solution so that it is available for build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec Add a Build Process Definition based on the Nugetter build process template: Configure the build process – specify: · .sln to build · Base path (output directory) · Nuget.exe file path · .nuspec file path Copy DLLs to a binary folder 1) Set copy local for an assembly reference to false 2)  MSBuild Copy Task – modify .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx <ItemGroup>     <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />   </ItemGroup>     <Target Name="BeforeBuild">     <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />   </Target> 3) Set Probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx -                 <?xml version="1.0" encoding="utf-8" ?> <configuration>   <runtime>     <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">       <probing privatePath="SourceAssemblies"/>     </assemblyBinding>   </runtime> </configuration> Forcing 'copy local = false' The following generic powershell script was added to the packages install.ps1: param($installPath, $toolsPath, $package, $project) if( $project.Object.Project.Name -ne "CopyPackages") { $asms = $package.AssemblyReferences | %{$_.Name} foreach ($reference in $project.Object.References) { if ($asms -contains $reference.Name + ".dll") { $reference.CopyLocal = $false; } } } An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.

    Read the article

  • CodePlex Daily Summary for Monday, August 18, 2014

    CodePlex Daily Summary for Monday, August 18, 2014

    Popular Releases

    Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.

    CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes over CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

    GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New Features: Added new "No...

    WallSwitch: WallSwitch 1.2.5: Version 1.2.5 changes: Added support for sequential order in collage mode. Added option to display multiple images per switch in collage mode. Fixed bug where border width wasn't being loaded properly and was reverting to default values. Fixed bug where sequential order was repeating images on multiple monitors. Decreased likelihood of random images being repeated.

    OpenCppCoverage: OpenCppCoverage 0.9.1: Add Jenkins support. Command line arguments can be placed inside a config file. If you do not have Visual Studio C++ 2013 you need to download the redistributable packages: http://www.microsoft.com/en-us/download/details.aspx?id=40784

    Easy Backup Windows Service: Release 2.0 with CU: Fix log error when the "To" directory does not exist in the file system. Force the program to run as administrator by default. Add 'everyday' schedule element. Update solution to VS 2013.

    Easy Backup Application: Release 2.0 with CU: Fix log error when the "To" directory does not exist in the file system. Fix app location initialization. Force the program to run as administrator by default. Update solution to VS 2013.

    TEBookConverter: 1.5: Added: Turkish and French translations. Added: A few interface changes. Removed: Skin.

    Dynamulet: Dynamulet v0.1: DynamoDB Transaction Server v0.1.

    Console parallel nunit tests runner: ConsoleUnitTestsRunner 1.03: Bug fixing.

    Fluentx: Fluentx v1.5.3: Added a few more extension methods.

    fastJSON: v2.1.2: 2.1.2 - bug fix for circular references.

    JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API version: v3. JPush documentation reference. .NET Framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.

    SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use the new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.

    Google .Net API: Drive.Sample: Instructions for the Google .NET Client API – Drive.Sample. Browse the source at http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.Sample, or the main file (Program.cs) at http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samples. 1. Checkout Instructions. Prerequisites: Install Visual Studio and Mercurial (http://mercurial.selenic.com/). ...

    FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...

    DNN CMS Platform: 07.03.02: Major highlights: Fixed backwards compatibility issue with 3rd party control panels. Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari. Fixed issue where users were able to create pages with the same name. Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade. Fixed issue that stopped some skins from being upgraded to newer versions. Fixed issue that randomly showed an unexpected error during us...

    WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel work) - Units are not supported yet (coming up). The Mac version is not yet as tested as the Windows version.

    MFCMAPI: August 2014 Release: Build: 15.0.0.1042. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010/2013 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system.

    EWSEditor: EwsEditor 1.10 Release: • Export and import of items as a full-fidelity stream works - without proxy classes! - I used raw EWS POSTs. • Turned off word wrap for the EWS request field in EWS POST windows. • Several windows with scrolling text boxes were limiting content to 32k - I removed this restriction. • Split server timezone info off to a separate menu item from the timezone info window so that the timezone info window can be used without logging into a mailbox. • Lots of updates to the TimeZone window. • UserAgen...

    New Projects

    ballmon: ballmon

    Exchange Database Recovery With and Without Log Files is Possible: This segment gives an overview of Exchange Server transaction log files. It describes the process by which users can recover their database with and without log files.

    Fabs.Net: A project I made for ego satisfaction and personal development.

    JacoChat: JacoChat is a simple chatting interface that uses my personal webserver as a "wall" for people to chat on.

    ManagedWin32: ManagedWin32 is a library that exposes the Win32 API to .NET applications.

    Open XML Extensions: The project provides additions to the Open XML SDK and related projects (e.g., PowerTools for Open XML), starting with MemoryStreams for Open XML Documents.

    orntic: Project for an insurance company.

    TBOX: The Treasure Box Library: TBOX is a multi-platform C library for Unix, Windows, Mac, iOS, Android, etc. It includes asio, stream, container, algorithm, xml and other library modules.

    WeatherTS: TypeScript weather application.

    ?????@/????: ??????????????:????,????,????,???????,????????,??????:????????,?????!

    ?????????: ????????????????????,????????:??、??、???,?????????????????????!
????-??: ??????????????,????,???????????????。

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants.

    There are three common patterns or usages of Human Workflow:
    1) Approval scenarios: manage documents and other transactional data through approval chains. For example: expense report approval, vacation approval, hiring approval, etc.
    2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people.
    3) Case management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time.

    SOA 11g Human Workflow includes the following features:
    Assignment and routing of tasks to the correct users or groups.
    Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task.
    Presentation of tasks to end users through a variety of mechanisms, including a Worklist application.
    Organization, filtering, prioritization and other features required for end users to productively perform their tasks.
    Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks.

    Human Workflow Architecture
    The Human Workflow component is divided into three modules: the service interface, the task definition module and the client interface module. The service interface handles the interaction with BPEL and other components. The client interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task: who should get the task assigned, what should happen next with the task, when must the task be completed, should the task be escalated, etc.

    Stages and Participants
    When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel, and you can combine them to create any usage you require.

    Assignment and Routing
    There are different ways a task can be assigned and routed:
    Single Approver: the task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified of the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed.
    Parallel: the task is assigned to a set of people who must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote. (A generic sketch of this voting rule appears after the Worklist section below.)
    Serial: participants must work in sequence. The most common scenario for this is management chain escalation.
    FYI (For Your Information): the task is assigned to participants who can view it, add comments and attachments, but cannot modify or complete the task.

    Task Actions
    The following is the list of actions that can be performed on a task:
    Claim: if a task is assigned to a group or multiple users, the task must be claimed first before acting on it.
    Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy).
    Pushback: the task is sent back to the previous assignee.
    Reassign: if the participant is a manager, he/she can delegate a task to his/her reports.
    Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete it. Any of the other assignees can then claim and complete the task.
    Request Information and Submit Information: used when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees.
    Suspend and Resume: if a task is not relevant, it can be suspended. A suspension is indefinite; it does not expire until Resume is used to resume working on the task.
    Withdraw: if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next.
    Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended one week.

    Notifications
    Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following:
    Assigned/renewed/delegated/reassigned/escalated
    Completed
    Error
    Expired
    Request Info
    Resume
    Suspended
    Added/updated comments and/or attachments
    Updated outcome
    Withdraw
    Other actions (e.g. acquiring a task)

    Worklist Application
    The Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist application users can:
    Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks.
    Filter the tasks view based on various criteria.
    Work with standard work queues, such as high priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high priority tasks, tasks due in 24 hours, expense approval tasks and more.
    Define custom work queues.
    Gain proxy access to part of another user's tasks.
    Define custom vacation rules and delegation rules.
    Enable group owners to define task dispatching rules for shared tasks.
    Collect a complete workflow history and audit trail.
    Use digital signatures for tasks.
    Run reports like Unattended tasks, Tasks productivity, etc.
    In the Worklist application, the right-hand side shows the tasks that have been assigned to the user along with each task's detail.
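    As a purely illustrative aside - this is not the Oracle Human Workflow API - the 50% parallel-voting rule described above boils down to a percentage check over the participants' outcomes. A minimal generic sketch, assuming the votes have already been collected as a list of 'APPROVE'/'REJECT' strings:

    # Generic sketch of a parallel-voting outcome check (not the Oracle API).
    # $votes is assumed to hold the outcomes gathered from the parallel participants.
    function Get-VoteOutcome {
        param(
            [string[]]$votes,
            [int]$requiredPercentage = 50   # 50 models the default vote; 100 would model a unanimous vote
        )
        $approvals  = @($votes | Where-Object { $_ -eq 'APPROVE' }).Count
        $percentage = if ($votes.Count -gt 0) { ($approvals / $votes.Count) * 100 } else { 0 }
        if ($percentage -ge $requiredPercentage) { 'APPROVED' } else { 'REJECTED' }
    }

    # Example: three of four reviewers approve, so the task outcome is APPROVED.
    Get-VoteOutcome -votes @('APPROVE','APPROVE','REJECT','APPROVE') -requiredPercentage 50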
    References
    Introduction to SOA Suite 11g Human Workflow Webcast
    Note 1452937.2 Human Workflow Information Center
    Using the Human Workflow Service Component 11.1.1.6
    Human Workflow Samples
    Human Workflow APIs Java Docs

    Read the article

< Previous Page | 169 170 171 172 173 174 175 176 177 178 179 180  | Next Page >