Search Results

Search found 1941 results on 78 pages for 'infrastructure'.


  • java.net.SocketException: Software caused connection abort: recv failed; Causes and cures?

    - by IVR Avenger
    Hi, all. I've got an application running on Apache Tomcat 5.5 on a Win2k3 VM. The application serves up XML to be consumed by some telephony appliances as part of our IVR infrastructure. The application, in turn, receives its information from a handful of SOAP services.

    This morning, the SOAP services were timing out intermittently, causing all sorts of Exceptions. Once these stopped, I noticed that our application was still performing very slowly, in that it took a long time to render and deliver pages. This sluggishness was noticed both on the appliances that consume the Tomcat output, and in a simple test of requesting some static documents from my web browser. Restarting Tomcat immediately resolved the issue. Cracking open the localhost log, I see a ton of these errors, right up until I restarted Tomcat:

        WARNING: Exception thrown whilst processing POSTed parameters
        java.net.SocketException: Software caused connection abort: recv failed

    After a bit of Googling, my working theory is that the SOAP issue caused my users to get errors, which caused them to make more requests, which put an increased load on the application. This caused it to run out of available sockets to handle incoming requests. So, here's my quandary:

    1. Is this a valid hypothesis, or am I just in over my head with HTTP and Tomcat?
    2. If this is a valid hypothesis, is there a way to increase the size of the "socket queue", so that this doesn't happen in the future?

    Thanks! IVR Avenger
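    For what it's worth, Tomcat's HTTP connector does expose something close to a "socket queue": the acceptCount attribute in server.xml, which bounds how many connections may wait while all worker threads are busy. A minimal sketch (values are illustrative, not recommendations):

    ```xml
    <!-- server.xml (Tomcat 5.5): acceptCount is the backlog of incoming
         connections allowed to queue while all maxThreads workers are busy;
         further connections are refused. -->
    <Connector port="8080" maxThreads="150" acceptCount="200"
               connectionTimeout="20000" />
    ```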


  • Maintain order of messages via proxies to app servers

    - by David Turner
    Hi, I am receiving messages from a third party via an HTTP POST, and it is important that the order in which the messages hit our infrastructure is maintained through the load balancers and proxies until they reach our application server. Quick diagram (proxies are in place due to security requirements):

        [ACE load balancer] - [2 proxies] - [Application Servers]

    or maybe:

        [ACE load balancer] - [2 proxies] - [ACE load balancer] - [Application Servers]

    My idea was that I would set up the load balancers in active-passive mode, to force all messages through one proxy, and then both proxies would hit a second load balancer, also configured active-passive, to hit one application server. Whilst the above is not ideal, it does give me resilience, and once a message is in my app servers, I enter a stateless world and load balance across both nodes of my cluster.

    However, I am concerned that even a single proxy could send messages out of order: perhaps if two messages are received very close together, message 2 might get processed faster than message 1. Is this possible? Likely? Is there a simple open source proxy (mod_proxy?) that can be easily configured to just pass messages through, and that guarantees to forward them in the order they are received? If so which one, and finally, links to how I should configure it would be great. In fact, any links to articles about avoiding "out of order" messages using hardware would be gratefully received.

    Thanks. PS: for those who are interested, the app is a Java Spring Integration application, currently on an application server.
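    For reference, a bare pass-through reverse proxy with Apache's mod_proxy is only a few lines; a sketch below, with placeholder hostnames. Note that even a single proxy like this serializes nothing by itself: ordering across two concurrent upstream connections is still not guaranteed, which is the crux of the question.

    ```apache
    # Minimal Apache reverse proxy sketch; hostnames are placeholders.
    <VirtualHost *:80>
        ServerName proxy.example.com
        ProxyRequests Off
        ProxyPass        / http://appserver.internal/
        ProxyPassReverse / http://appserver.internal/
    </VirtualHost>
    ```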


  • Configure IIS7 to serve static content through the ASP.NET runtime

    - by Anton Gogolev
    I searched high and low and still cannot find a definite answer. How do I configure IIS 7.0, or a web application in IIS, so that the ASP.NET runtime will handle all requests -- including ones for static files like *.js, *.gif, etc.?

    What I'm trying to do is as follows. We have a kind of SaaSy site, which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

    ```csharp
    public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
    {
        if (PhysicalFileExists(virtualPath))
        {
            var virtualFile = base.GetFile(virtualPath);
            return virtualFile;
        }

        if (VirtualFileExists(virtualPath))
        {
            var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
            var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
            Trace.WriteLine(string.Format("Serving '{0}' from '{1}'", brandedVirtualPath, absolutePath),
                            "BrandingAwareVirtualPathProvider");
            var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
            return virtualFile;
        }

        return null;
    }
    ```

    The basic idea is as follows: we have a branding folder inside our web app, which in turn contains folders for each "brand", with "brand" being equal to host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.
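    One direction worth noting (a sketch, not verified against this exact setup): in IIS 7 integrated pipeline mode, web.config can push every request, static files included, through the managed pipeline, which is a prerequisite for a VirtualPathProvider ever seeing those requests.

    ```xml
    <!-- web.config sketch: route *.js, *.gif, *.css etc. through managed
         modules when running in IIS 7 integrated pipeline mode. -->
    <system.webServer>
      <modules runAllManagedModulesForAllRequests="true" />
    </system.webServer>
    ```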


  • Transitive SQL query on same table

    - by MiKu
    Hey. Consider the following table and data:

        in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
        timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
        timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
        timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
        timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represent the log of an execution flow of some data across servers; e.g. some data flowed from 'others-server1' through a bunch of 'my-servers' and finally to the destined 'others-server2'.

    Questions:

    1) I need to present this log to the client in a form where he doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp at which the data entered my infrastructure and when it left, drilling down to the following info:

        in_timestamp (of 'others-server1' to 'my-server1')
        out_timestamp (of 'my-server3' to 'others-server2')
        name
        status

    I want to write SQL for this. Can someone help? NOTE: there might not be three 'my-servers' every time; it differs from situation to situation. E.g. there might be four 'my-servers' involved for, say, data2!

    2) Are there any other alternatives to SQL? I mean stored procs, etc.?

    3) Optimizations? (The records are huge in number! As of now, around 5 million a day, and we are supposed to show records that are up to a week old.)

    In advance, THANKS FOR THE HELP! :)
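    To make the desired output of question 1 concrete, here is one possible shape for the query. This is only a sketch: flow_log is a stand-in table name, and it assumes internal hops are recognizable by the my- prefix and that name identifies one flow.

    ```sql
    -- Pair each flow's entry hop (in_server outside our infrastructure)
    -- with its exit hop (out_server outside our infrastructure).
    SELECT first_hop.in_timestamp,
           last_hop.out_timestamp,
           first_hop.name,
           last_hop.status
    FROM   flow_log AS first_hop
    JOIN   flow_log AS last_hop
           ON last_hop.name = first_hop.name
    WHERE  first_hop.in_server NOT LIKE 'my-%'
      AND  last_hop.out_server NOT LIKE 'my-%';
    ```

    Because it only looks at the first and last hops, the same query works no matter how many 'my-servers' sit in between. At 5 million rows a day, an index covering (name, in_server, out_server) would likely matter.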


  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC / DI (StructureMap) and the repository pattern, and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories. Currently I am creating a new session on each method call, but of course this means that I cannot benefit from lazy loading.

    Therefore I wish to implement session-per-request, but in doing so this will make my web project dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions on BeginRequest/EndRequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx).

    Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project? Having the web project dependent on NHibernate prompts my next (few) questions: why even bother with the repository? Since my web app is calling services that talk to the repositories, why not ditch the repositories and just put my NHibernate persistence code inside the services?

    And finally, is there really any need to split out into so many projects? Are a web project and an infrastructure project sufficient? I realise that I have veered off a bit from my original question, but it seems that everyone has their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people keep their mapping files with the related classes, others have a separate project for this.

    Many thanks, Ben
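    For the BeginRequest/EndRequest approach mentioned above, the usual shape is an IHttpModule that opens one session per request and stashes it where the repositories can find it. A minimal sketch, assuming a hypothetical SessionFactoryHolder that builds the ISessionFactory once at startup:

    ```csharp
    using System.Web;
    using NHibernate;

    public class NHibernateSessionModule : IHttpModule
    {
        private const string SessionKey = "nh.session";

        public void Init(HttpApplication context)
        {
            // Open one session when the request starts...
            context.BeginRequest += (sender, e) =>
                HttpContext.Current.Items[SessionKey] =
                    SessionFactoryHolder.Factory.OpenSession();

            // ...and dispose of it when the request ends.
            context.EndRequest += (sender, e) =>
            {
                var session = HttpContext.Current.Items[SessionKey] as ISession;
                if (session != null)
                    session.Dispose();
            };
        }

        public void Dispose() { }
    }
    ```

    Repositories would then resolve the current ISession from HttpContext.Current.Items (or have StructureMap inject it that way), which keeps lazy loading alive for the whole request.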


  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:

    ```csharp
    public interface IUnitOfWork
    {
        void BeginTransaction();
        void Commit();
    }
    ```

    and our repositories looked like this:

    ```csharp
    public interface IRepository<T>
    {
        T GetByID();
        void Save(T entity);
        void Delete(T entity);
    }
    ```

    In this setup our load and save would handle the mapping between both data stores, because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations.

    Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method, because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

    ```csharp
    public interface IUnitOfWork
    {
        void BeginTransaction();
        void Save();
        void Commit();
    }

    public interface IRepository<T>
    {
        T GetByID();
        void Add(T entity);
        void Delete(T entity);
    }
    ```

    This solves the data access problem with Entity Framework, but does not solve the problem of our Active Directory integration. Before, it lived in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I would argue this design only works if you have a single data store using Entity Framework. Any ideas how to best approach this issue? Where should I put this logic?
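    One possible arrangement, sketched below rather than taken from the question: make the unit of work the coordinator of both stores, so repositories stay persistence-agnostic. IDirectoryService and its FlushPendingChanges method are hypothetical stand-ins for the Active Directory domain service described above.

    ```csharp
    using System.Data.Objects;

    public class DualStoreUnitOfWork : IUnitOfWork
    {
        private readonly ObjectContext _context;       // Entity Framework side
        private readonly IDirectoryService _directory; // AD side (hypothetical)

        public DualStoreUnitOfWork(ObjectContext context, IDirectoryService directory)
        {
            _context = context;
            _directory = directory;
        }

        public void BeginTransaction()
        {
            // A System.Transactions.TransactionScope could be opened here if
            // both stores must succeed or fail together.
        }

        public void Save()
        {
            _context.SaveChanges();           // flush tracked entities to SQL
            _directory.FlushPendingChanges(); // replay queued AD operations (hypothetical)
        }

        public void Commit()
        {
            // Complete the TransactionScope opened in BeginTransaction().
        }
    }
    ```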


  • SEO: A whois server that works for .SE domains?

    - by Niels Bosma
    I'm developing a small domain checker and I can't get .SE to work:

    ```csharp
    public string Lookup(string domain, RecordType recordType, SeoToolsSettings.Tld tld)
    {
        TcpClient tcp = new TcpClient();
        tcp.Connect(tld.WhoIsServer, 43);

        string strDomain = recordType.ToString() + " " + domain + "\r\n";
        byte[] bytDomain = Encoding.ASCII.GetBytes(strDomain.ToCharArray());

        Stream s = tcp.GetStream();
        s.Write(bytDomain, 0, strDomain.Length);

        StreamReader sr = new StreamReader(tcp.GetStream(), Encoding.ASCII);
        string strLine = "";
        StringBuilder builder = new StringBuilder();
        while (null != (strLine = sr.ReadLine()))
        {
            builder.AppendLine(strLine);
        }
        tcp.Close();

        if (tld.WhoIsDelayMs > 0)
            System.Threading.Thread.Sleep(tld.WhoIsDelayMs);

        return builder.ToString();
    }
    ```

    I've tried the whois servers whois.nic-se.se and whois.iis.se, but I keep getting:

        # Copyright (c) 1997- .SE (The Internet Infrastructure Foundation).
        # All rights reserved.
        # The information obtained through searches, or otherwise, is protected
        # by the Swedish Copyright Act (1960:729) and international conventions.
        # It is also subject to database protection according to the Swedish
        # Copyright Act.
        # Any use of this material to target advertising or
        # similar activities is forbidden and will be prosecuted.
        # If any of the information below is transferred to a third
        # party, it must be done in its entirety. This server must
        # not be used as a backend for a search engine.
        # Result of search for registered domain names under
        # the .SE top level domain.
        # The data is in the UTF-8 character set and the result is
        # printed with eight bits.

        "domain google.se" not found.

    Edit: I've tried changing to UTF-8 with no other result. When I try using whois from Sysinternals I get the correct result, but not with my code, not even using SE.whois-servers.net.

    /Niels


  • Is there a way to avoid spaghetti code over the years?

    - by Yoni Roit
    I've had several programming jobs, each one with 20-50 developers and a project going on for 3-5 years. Every time it's the same. Some programmers are bright, some are average. Everyone has their CS degree, everyone has read design patterns. Intentions are good, people are trying hard to write good code, but still, after a couple of years the code turns into spaghetti. Changes in module A suddenly break module B. There are always these parts of the code that no one can understand except the person who wrote them. Changing the infrastructure is impossible, and backwards compatibility issues prevent good features from getting in. Half of the time you just want to rewrite everything from scratch.

    And people more experienced than me treat this as normal. Is it? Does it have to be? What can I do to avoid this, or should I accept it as a fact of life?

    Edit: Guys, I am impressed with the amount and quality of responses here. This site and its community rock!


  • How can I set up Hudson to use the same repository for different projects and maintain separate changelogs?

    - by Allen
    I typically set up SVN to host one big project per repository, but a lot of our infrastructure has changed and we now have one main SVN server with a hierarchy like so:

        Branches
        Tags
        Trunk
            Project1 files & folders
            Project2 files & folders
            Project3 files & folders

    Projects 1, 2, and 3 do not share anything amongst themselves; they are independent projects, each with its own solution file to be built. I can set up projects in Hudson like so:

        Repository Url: http://server/svn/MainRepository
        Local module directory (optional): /Trunk/Project1

    That will maintain a separate workspace for each project, but every time you commit to Project 2 or Project 3, a build gets kicked off in Hudson for every project based in that repository. Also, any commit made anywhere in the repository is pulled down and inserted into the Hudson changelog for all of them.

    I know the easiest solution would be to simply separate every project into its own repository. However, if I can't do that for various reasons, is there a feasible way to achieve the functionality that having separate repositories gives me? I want commits to the subfolder of Project 1 to only affect Project 1. No other project's commits should cause Project 1 to build, and Project 1's changelog in Hudson should only have commit notes from Project 1.


  • Razor website not working even though all DLLs are present

    - by Michael Tot Korsgaard
    I've uploaded a .cshtml website to a Surftown server, and I get some problems running the code: the page does not run the Razor code. This is how the page renders (Default.cshtml) [screenshot in original post]. I've already checked for internal communication problems, with this result [screenshot in original post].

    But then why isn't it working, and how can I fix it? I've heard that it can be a problem with views, but how would I fix this if that's the case?

    My website's folder tree (and some files too):

        App_Code
        App_Data
        packages
            Microsoft.AspNet.Razor.2.0.20710.0
            Microsoft.Asp.Net.WebPages.2.0.20710.0
            Microsoft.Asp.Net.WebPages.Administration.2.0.20710.0
            Microsoft.Asp.Net.WebPages.Data.2.0.20710.0
            Microsoft.Asp.Net.WebPages.WebData.2.0.20710.0
            Microsoft.Web.Infrastructure.1.0.0.0
            NuGet.Core.1.6.2
        bin
        packages
            jQuery.2.0.3
                Content
                Scripts
                Tools
            Microsoft.AspNet.Mvc.4.0.30506.0
                lib
                    net40
            Microsoft.AspNet.Razor.2.0.30506.0
                lib
                    net40
            Microsoft.AspNet.WebPages.2.0.30506.0
                lib
                    net40
        Pages
            Chapters
                Read.cshtml
            Edit
                Move
                    Chapter.cshtml
                    Entry.cshtml
            Entries
                EnterEntry.cshtml
                EnterNote.cshtml
            Login
                Login.cshtml
            Search
                Result.cshtml
        Scripts
            Addons
                TinyMCE
        Styles
            CSS
        Views
            _Layout.cshtml
        Default.cshtml

    My web.config file looks like this:

    ```xml
    <?xml version="1.0"?>
    <configuration>
      <system.web>
        <compilation debug="true" targetFramework="4.0">
          <buildProviders>
            <add extension=".cshtml" type="System.Web.WebPages.Razor.RazorBuildProvider, System.Web.WebPages.Razor"/>
          </buildProviders>
          <assemblies>
            <add assembly="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </assemblies>
        </compilation>
      </system.web>
      <connectionStrings>
        <add connectionString="database connection" providerName="System.Data.SqlClient"/>
      </connectionStrings>
    </configuration>
    ```

    EDIT: Is it a problem that all my files are .cshtml?


  • How to configure database connection securely

    - by chiccodoro
    Similar but not the same: "How to securely store database connection details" and "Securely connecting to database within a application".

    Hi all, I have a C# WinForms application connecting to a database server. The database connection string, including a generic user/pass, is placed in an NHibernate configuration file, which lies in the same directory as the exe file. Now I have this issue: the user that runs the application should not get to know the username/password of the general database user, because I don't want him to rummage around in the database directly. Alternatively I could hardcode the connection string, which is bad because the administrator must be able to change it if the database is moved or if he wants to switch between dev/test/prod environments.

    So far I've found three possibilities:

    - The first referenced question was generally answered by making the file only readable by the user that runs the application. But that's not enough in my case (the user running the application is a person; the database user/pass are general and shouldn't even be accessible to the person).
    - The first answer additionally proposed to encrypt the connection data before writing it to the file. With this approach, the administrator is no longer able to configure the connection string, because he cannot encrypt it by hand.
    - The second referenced question provides an approach for this very scenario, but it seems very complicated.

    My questions to you:

    - This is a very general issue, so isn't there any general "how-to-do-it" way, somehow a "design pattern"?
    - Is there some support in .NET's config infrastructure?
    - (optional, maybe out of scope) Can I combine that easily with the NHibernate configuration mechanism?
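    On the second question: .NET's protected configuration is the built-in support closest to this. A sketch of encrypting the connectionStrings section in place with the DPAPI provider (requires a reference to System.Configuration.dll; the administrator edits the plain file, then the section is re-protected):

    ```csharp
    using System.Configuration;

    static class ConnectionStringProtector
    {
        // Encrypts the connectionStrings section of the given executable's
        // .config file using Windows DPAPI. Afterwards the file on disk no
        // longer contains the plain-text password, but the application can
        // still read it transparently through ConfigurationManager.
        public static void Protect(string exePath)
        {
            Configuration config = ConfigurationManager.OpenExeConfiguration(exePath);
            ConfigurationSection section = config.GetSection("connectionStrings");

            if (!section.SectionInformation.IsProtected)
            {
                section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
                config.Save(SaveMode.Modified);
            }
        }
    }
    ```

    Whether this can be pointed at an NHibernate config file rather than app.config is the open part of the question; out of the box it works on the standard .config hierarchy.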


  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on), but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to the network change.

    I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...

        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I check the number of file descriptors, and I see quite a few entries which look like:

        java 18510 root 8556u sock 0,4 1555182 can't identify protocol
        java 18510 root 8557u sock 0,4 1555320 can't identify protocol
        java 18510 root 8558u sock 0,4 1555736 can't identify protocol
        java 18510 root 8559u sock 0,4 1555883 can't identify protocol

    If I do a count of open file descriptors every minute, I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application, so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ.

    What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look?

    thanks, chuck
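    A small shell loop (a sketch; replace the PID with the actual java process id) makes the growth rate easy to capture alongside timestamps, using only lsof's standard -p flag:

    ```sh
    # Log the open-descriptor count of the Glassfish java process once a
    # minute; 18510 is the pid from the lsof output above.
    PID=18510
    while true; do
        echo "$(date) $(lsof -p $PID | wc -l)"
        sleep 60
    done
    ```

    From there, diffing two consecutive lsof -p listings should show exactly which descriptors are new each minute, even when the protocol cannot be identified.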


  • Adding an ADO.NET Entity Data Model throws build errors

    - by user3726262
    I am using Visual Studio 2013 Express. I create a new project and then I add a database to that project. But when I add an ADO.NET Entity Framework model to that project and then run the program, I get the four build errors listed below.

    To try to remedy this myself, I added the namespaces 'System.Data.Entity' and 'System.Data.Entity.Design', but that didn't help. Also, I uninstalled and re-installed the NuGet package. I also uninstalled and re-installed Visual Studio 2013 Express for Windows Desktop. But these measures didn't help the situation either.

    Please note that I used to use the Entity Data Model just fine. But it was around the time that I did a system restore on my computer, updated VS 2013 with an update offered on the start page, and signed up for MS Azure that I started running into the problem described above. Now I would think that uninstalling and reinstalling Visual Studio 2013 and then installing the NuGet package would solve all problems. What am I missing here?

    The errors mentioned above are:

        Error 1: The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 14 30 DataLayer
        Error 2: The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 16 52 DataLayer
        Error 3: The type or namespace name 'DbModelBuilder' could not be found (are you missing a using directive or an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 23 49 DataLayer
        Error 4: The type or namespace name 'DbSet' could not be found (are you missing a using directive or an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 28 16 DataLayer

    Thank you, and I realize that my last attempt at this question was rather rough-draftish. John


  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: we have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a client, which is a Windows Forms based .NET app. Typically the service has 50-100 inbound calls/minute to add or query jobs that are stored in a table on SQL Server.

    The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs, and starts processing them. A job has seven states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: this whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario?

    Apologies for the long explanation!
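    For concreteness, the duplex contract shape being described probably looks something like the sketch below. The type definitions are reconstructions from the prose, not the poster's actual code; the point is that every registered callback channel must be invoked per state change, which is where the 200+ calls/second come from.

    ```csharp
    using System.ServiceModel;

    public class Job { public int Id; }
    public class JobQuery { }
    public enum JobState { Queued, PreProcessing, Processing, PostProcessing, Completed, Failed, Locked }

    [ServiceContract(CallbackContract = typeof(IJobEvents))]
    public interface IJobService
    {
        [OperationContract]
        void AddNewJob(Job job);

        [OperationContract]
        Job[] GetJobs(JobQuery query);
    }

    // Implemented on each of the 70-100 clients; the service calls back
    // through the duplex channel on every job state change.
    public interface IJobEvents
    {
        [OperationContract(IsOneWay = true)]
        void JobStateChanged(int jobId, JobState newState);
    }
    ```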


  • Previous layer not showing

    - by meep
    Hello, I have a menu which can fold out and show some content based on the menu item. See demo: http://arcticbusinessnetwork.com.web18.curanetserver.dk/home.aspx

    If I hover over the menu it works great, but if I click on "Raw Material", I cannot access the PREVIOUS tab (Infrastructure), while the others fade in fine. How can this be? This is the JavaScript:

    ```html
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script>
    <script type="text/javascript" src="/js/jquery.hoverIntent.min.js"></script>
    <script type="text/javascript">
        $(document).ready(function () {
            function megaHoverOver() {
                $(this).find(".menu_content").stop().fadeTo('fast', 1).show();
            }

            function megaHoverOut() {
                $(this).find(".menu_content").stop().fadeTo('fast', 0, function () {
                    $(this).hide();
                });
            }

            var config = {
                sensitivity: 2,
                interval: 50,
                over: megaHoverOver,
                timeout: 300,
                out: megaHoverOut
            };

            $("#menu ul li").not(".parenttocurrent").not(".current").find(".menu_content").css({ 'opacity': '0' });
            $("#menu ul li").not(".parenttocurrent").not(".current").hoverIntent(config);
        });
    </script>
    ```


  • CORBA on MacOS X (Cocoa)

    - by user8472
    I am currently looking into different ways to support distributed model objects (i.e., a computational model that runs on several different computers) in a project that initially focuses on MacOS X (using Cocoa). As far as I know there is the possibility to use the class cluster around NSProxy, but there also seem to be implementations of CORBA around with Objective-C support.

    At a later time there may be the need to also support/include Windows machines. In that case I would need to use something like GNUstep on the Windows side (which may be an option, if it works well) or come up with a combination of both technologies. Or write something manually (which is, of course, the least desirable option).

    My questions are:

    - If you have experience with both technologies (Cocoa native infrastructure vs. CORBA), can you point out some key features/issues of either approach?
    - Is it possible to use GNUstep with Cocoa in the way explained above?
    - Is it possible (and reasonably feasible, i.e. simpler than writing a network layer manually) to communicate among all MacOS clients using Cocoa's technology and with Windows clients through CORBA?


  • PHP cURL error: "Empty reply from server"

    - by ABach
    All, I have a class function to interface with the RESTful API for Last.fm; its purpose is to grab the top tracks for my user. Here it is:

    ```php
    private static $base_url = 'http://ws.audioscrobbler.com/2.0/';

    public static function getTopTracks($options = array())
    {
        $options = array_merge(array(
            'user'    => 'bachya',
            'period'  => NULL,
            'api_key' => 'xxxxx...', // obfuscated, obviously
        ), $options);
        $options['method'] = 'user.getTopTracks';

        // Initialize cURL request and set parameters
        $ch = curl_init();
        curl_setopt_array($ch, array(
            CURLOPT_URL            => self::$base_url,
            CURLOPT_POST           => TRUE,
            CURLOPT_POSTFIELDS     => $options,
            CURLOPT_RETURNTRANSFER => TRUE,
            CURLOPT_TIMEOUT        => 30,
            CURLOPT_USERAGENT      => 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)'
        ));
        $results = curl_exec($ch);
        return $results;
    }
    ```

    This returns "Empty reply from server". I know that some have suggested that this error comes from some fault in network infrastructure; I do not believe this to be true in my case. If I run a cURL request through the command line, I get my data; the Last.fm service is up and accessible. Before I go to those folks and see if anything has changed, I wanted to check with you fine folks and see if there's some issue in my code that would be causing this. Thanks!
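    One quick diagnostic step (a sketch using only standard PHP cURL functions): check the return value and ask cURL what actually happened before closing the handle.

    ```php
    <?php
    // After curl_exec(), curl_errno()/curl_error() report the transport-level
    // failure behind "Empty reply from server"; CURLOPT_VERBOSE dumps the
    // full request/response conversation to stderr.
    curl_setopt($ch, CURLOPT_VERBOSE, TRUE);
    $results = curl_exec($ch);
    if ($results === false) {
        error_log('cURL error (' . curl_errno($ch) . '): ' . curl_error($ch));
    }
    curl_close($ch);
    ```

    Comparing that verbose output with the working command-line request (headers, HTTP version, POST encoding) usually narrows down which difference the server objects to.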


  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day. I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X while the code is on machine Y? For example, I build and run this code on my dev machine (localhost):

    ```csharp
    servicehost = new ServiceHost(typeof(MyService1));
    servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(),
        "net.tcp://my.datacenter.com/MyApp/MyService1"); // This is normally set to localhost.
    ```

    What implementation must be done on the datacenter server so that if I point to http://my.datacenter.com/MyApp/MyService1, it will route the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet.

    It is a possible infrastructure that we are researching, to see if we can create a service-bus type architecture so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance.


  • UDP sockets in ad hoc network (Ubuntu 9.10)

    - by Ekhiotz
    Hi! I am using BSD sockets in Ubuntu 9.10 to send UDP packets in broadcast with the following code:

    ```c
    sock_fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
    //sock_fd = socket(AF_INET, SOCK_DGRAM, 0);

    receiver_addr.sin_family = PF_INET;
    // does not send with broadcast in ad hoc
    receiver_addr.sin_addr.s_addr = htonl(INADDR_BROADCAST);
    inet_aton("169.254.255.255", &receiver_addr.sin_addr);
    receiver_addr.sin_port = htons(port);

    int broadcast = 1;
    // this call is what allows broadcast packets to be sent:
    if (setsockopt(sock_fd, SOL_SOCKET, SO_BROADCAST, &broadcast, sizeof broadcast) == -1) {
        perror("setsockopt (SO_BROADCAST)");
        exit(1);
    }

    ret = sendto(sock_fd, packet, size, 0, (struct sockaddr*)&receiver_addr, sizeof(receiver_addr));
    ```

    Note that this is not all the code; it is only to give an idea. The program sends all the data with INADDR_BROADCAST if I am connected to an infrastructure wireless network. However, if my laptop is connected to an ad-hoc network, it is able to receive all the data, but not to send it. I have solved the problem by using the 169.254.255.255 broadcast address, but I would like to know what is going on. Thank you in advance!


  • A generic error occurred in GDI+, JPEG Image to MemoryStream

    - by madcapnmckay
    Hi, this seems to be a bit of an infamous error all over the web; so much so that I have been unable to find an answer to my problem, as my scenario doesn't fit. An exception gets thrown when I save the image to the stream. Weirdly, this works perfectly with a PNG but gives the error below with JPG and GIF, which is rather confusing. Most similar problems out there relate to saving images to files without permissions. Ironically, the usual solution is to use a memory stream... which is exactly what I am doing.

    ```csharp
    public static byte[] ConvertImageToByteArray(Image imageToConvert)
    {
        using (var ms = new MemoryStream())
        {
            ImageFormat format;
            switch (imageToConvert.MimeType())
            {
                case "image/png":
                    format = ImageFormat.Png;
                    break;
                case "image/gif":
                    format = ImageFormat.Gif;
                    break;
                default:
                    format = ImageFormat.Jpeg;
                    break;
            }

            imageToConvert.Save(ms, format);
            return ms.ToArray();
        }
    }
    ```

    More detail on the exception (the reason this causes so many issues is the lack of explanation):

        System.Runtime.InteropServices.ExternalException was unhandled by user code
        Message="A generic error occurred in GDI+."
        Source="System.Drawing"
        ErrorCode=-2147467259
        StackTrace:
            at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
            at System.Drawing.Image.Save(Stream stream, ImageFormat format)
            at Caldoo.Infrastructure.PhotoEditor.ConvertImageToByteArray(Image imageToConvert) in C:\Users\Ian\SVN\Caldoo\Caldoo.Coordinator\PhotoEditor.cs:line 139
            at Caldoo.Web.Controllers.PictureController.Croppable() in C:\Users\Ian\SVN\Caldoo\Caldoo.Web\Controllers\PictureController.cs:line 132
            at lambda_method(ExecutionScope , ControllerBase , Object[] )
            at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
            at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
            at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7()
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
        InnerException:

    OK, things I have tried so far: cloning the image and working on that; retrieving the encoder for that MIME type and passing it along with the JPEG quality setting. Please, can anyone help?
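    A commonly cited cause for this exact symptom (offered here as a hedged suggestion, not a confirmed diagnosis): GDI+ keeps the stream an Image was loaded from open for the image's lifetime, and Save() fails with the generic error if that backing stream has since been disposed. Copying the pixels into a fresh Bitmap first removes that dependency:

    ```csharp
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;

    public static byte[] ConvertImageToByteArray(Image imageToConvert, ImageFormat format)
    {
        // new Bitmap(Image) redraws the pixels into a new in-memory bitmap
        // with no tie to the original source stream, so Save() no longer
        // depends on it.
        using (var safeCopy = new Bitmap(imageToConvert))
        using (var ms = new MemoryStream())
        {
            safeCopy.Save(ms, format);
            return ms.ToArray();
        }
    }
    ```

    The questioner mentions having tried cloning; note that Image.Clone() is often reported to preserve the stream dependency, whereas the Bitmap constructor makes a genuinely independent copy.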


  • Visual Studio and .NET programming

    - by Vit
    Hi, I just want to ask whether I am right or not about .NET. So, .NET is a new framework that enables you to easily use new and old Windows functions. It is similar to Java in the way that it is also compiled into "bytecode", but its name is Common Language Infrastructure, or CLI. This language is interpreted by the .NET Framework, so code generated by programming with .NET cannot be executed directly by the CPU. Now, a few languages can be compiled to CLI: first there was the Microsoft-developed C#, then J#, C++ and others. I suspect that this is in general right; at least I hope I understand it right.

    But what I am still missing is: can C# be compiled to machine code? And, when using Visual Studio 2005, if I select a Win32 project, it is compiled into machine code, so the only things you need to run these apps are the Windows dynamic-link libraries, since static library code is embedded into the app during the linking phase. Those dynamic-link libraries are present in every Windows installation, or provided by DirectX installations. But when I select CLR in Visual Studio 2005, the app is compiled into CLI code; the .NET Framework is started first, and it then executes the program, since the program is not machine code.

    So, am I right? I ask because you can read this information on the internet, but I have no one to tell me whether I understand it right or not. Thanks.


  • SqlServer slow on production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server, and the data, log and filestream files are on another storage server (data and filestream together, the log on a separate server).

    Now, there's this query that, when we run it, gives these durations (first we clear the cache): 300 ms, 20 ms, 15 ms, 17 ms, ... The first time it takes longer, but from then on it is cached.

    At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; I will try this tomorrow): 2500 ms, 2600 ms, 2400 ms, ...

    The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause: not enough memory? Fragmentation? Physical storage? I know it can be a lot of things, but maybe some of you have got some info for me on how to proceed with this issue.
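    For the "clear the cache" step mentioned above, the standard commands are below (a sketch; both need elevated permissions and flush the caches server-wide, so they should be used with care on a shared production box):

    ```sql
    CHECKPOINT;             -- flush dirty pages so DROPCLEANBUFFERS removes everything
    DBCC DROPCLEANBUFFERS;  -- empty the data page cache (gives cold-read timings)
    DBCC FREEPROCCACHE;     -- discard compiled query plans
    ```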


  • redirecting the root domain - SEO and other issues, need some guidance!

    - by Jim Sp
    I'm not familiar with some of these forwarding methods and I need help. My issue is this: I have a site hosted on discountasp.net. My domain was registered through 1&1, and I pointed the DNS to what discountasp.net wanted. So when a user types www.mydomain.com, he/she sees the ASP.NET site hosted on discountasp.net, which is all fine.

    My main page is Index.aspx. I really suck at HTML page design and I don't have the time or the talent to fiddle with it (or the money to have it done by a pro). The rest of the pages are fine. I want to use a good theme from Tumblr or Blogger, one of the blog sites, and create a page that I want to use as the first page, directly on Blogger or Tumblr, say yyy.blogspot.com (I have many reasons, so for now please don't bash my decision; let's just say that's what I want).

    That means when a user types www.mydomain.com, it should redirect to the Blogger or Tumblr page. Everything else stays the same: the links on the blog page will say www.mydomain.com/xxxx and show what's on the hosted website. I have set up the IIS rewrite rules etc. so that all works just fine.

    The bottom line is: I want to show an external site's web page as my root page. I suppose I'm struggling to even explain what I want! I can of course do a Response.Redirect on the Index.aspx page, which is the simplest way to manage this, but the big question is: will this hurt SEO in some way? If not, that is what I'd do and leave the rest of the infrastructure intact (I have already done this as a test, and it works fine). Thank you very much. j
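    On the SEO worry: Response.Redirect issues a temporary (302) redirect by default, and the usual advice for a permanent home-page move is a 301. A sketch of what Index.aspx could do instead (the blog URL is a placeholder):

    ```csharp
    using System;
    using System.Web.UI;

    public partial class Index : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Permanent redirect: tells crawlers the root page has moved for good.
            Response.Status = "301 Moved Permanently";
            Response.AddHeader("Location", "http://yyy.blogspot.com/");
            Response.End();
        }
    }
    ```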


  • problem with phpMyAdmin advanced features

    - by typoknig
    Hi all, I am having trouble putting the final touches on my MySQL/Apache/phpMyAdmin install on a Windows XP system. I am trying to get rid of all the error messages in phpMyAdmin, and I have gotten rid of all of them except the ones related to "advanced features". The exact error message is:

        The additional features for working with linked tables have been deactivated. To find out why click here.

    I have read up on the cause of the errors, but I must be missing something, because I still cannot get the warning to go away. Here is what I have done:

    1. Created a linked-tables infrastructure (default name "phpmyadmin") per the phpMyAdmin instructions, and enabled "pmadb" in my "config.inc.php" file.
    2. Specified (enabled) the table names in my "config.inc.php" file (there are 9 tables in total).
    3. Created a "controluser" and granted it only Select privileges, per the phpMyAdmin instructions.
    4. Adjusted "controluser" pma and "controlpass" pmapass in the "config.inc.php" file.

    From what I can see, these are all the instructions phpMyAdmin gives on this subject, and I am unable to locate any tutorials on the specifics of "advanced features" in phpMyAdmin. Any help would be appreciated, and be gentle, this is my first go with MySQL/phpMyAdmin.
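    For comparison, a config.inc.php fragment for the linked-tables setup usually looks like the sketch below (table names are phpMyAdmin's defaults; the exact set varies by version, and a mismatch between any of these names and the actual tables in the "phpmyadmin" database is a common reason the warning persists):

    ```php
    <?php
    $i = 1;
    $cfg['Servers'][$i]['controluser']     = 'pma';
    $cfg['Servers'][$i]['controlpass']     = 'pmapass';
    $cfg['Servers'][$i]['pmadb']           = 'phpmyadmin';
    $cfg['Servers'][$i]['bookmarktable']   = 'pma_bookmark';
    $cfg['Servers'][$i]['relation']        = 'pma_relation';
    $cfg['Servers'][$i]['table_info']      = 'pma_table_info';
    $cfg['Servers'][$i]['table_coords']    = 'pma_table_coords';
    $cfg['Servers'][$i]['pdf_pages']       = 'pma_pdf_pages';
    $cfg['Servers'][$i]['column_info']     = 'pma_column_info';
    $cfg['Servers'][$i]['history']         = 'pma_history';
    $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';
    $cfg['Servers'][$i]['tracking']        = 'pma_tracking';
    ```

    Note also that the control user typically needs SELECT on parts of the mysql database plus SELECT, INSERT, UPDATE and DELETE on the phpmyadmin database itself, not just SELECT, for all advanced features to activate.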


  • How can I send rich emails using the user's mail client?

    - by Brann
    I need my .NET program to send rich emails (usually containing table data, around 20 columns x 10 rows) using the user's mail infrastructure, allowing him to review/edit the mail before sending it, and storing the mail in his 'sent items' folder.

    mailto: seems the obvious choice, but unfortunately it supports neither attachments nor HTML bodies. It seems some clients support some extra features (e.g. Outlook 97 used to support an &Attach tag, but this is not the case for more recent versions). I could use mailto: and try to format the text body to look nice (using tabs, etc.), but this isn't really elegant and wouldn't support large data.

    Using automation seems a very huge task, as I would need to automate dozens of clients (4 or 5 versions of Outlook, Lotus Notes, Thunderbird, etc.). This would be a huge undertaking, and it's not really my core business.

    I could send emails through code and write my own mail form to let the user edit the mail, but this would have a lot of drawbacks:

    - the user would need to manually configure the mail server settings
    - he wouldn't have access to his contact directory
    - the mail wouldn't be stored in his 'sent items' folder

    This seems a quite common issue, but I haven't found any satisfying solution yet. Does someone know of a library supporting this (i.e. containing automation logic for most mainstream email clients)? Or an alternative to mailto:?
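    For the Outlook-only slice of the problem, Office automation does cover all three requirements (HTML body, user review before sending, Sent Items). A sketch, assuming a project reference to the Outlook interop assembly; it does nothing for Lotus Notes or Thunderbird, which is exactly the gap the question describes:

    ```csharp
    using Outlook = Microsoft.Office.Interop.Outlook;

    static class MailComposer
    {
        public static void ComposeHtmlMail(string to, string subject, string htmlBody)
        {
            var app = new Outlook.Application();
            var mail = (Outlook.MailItem)app.CreateItem(Outlook.OlItemType.olMailItem);

            mail.To = to;
            mail.Subject = subject;
            mail.HTMLBody = htmlBody;   // e.g. the 20x10 table rendered as <table>

            // Opens the compose window so the user can review/edit and send;
            // Outlook then files the message in Sent Items as usual.
            mail.Display(false);
        }
    }
    ```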

