Search Results

Search found 13070 results on 523 pages for 'simply tom'.


  • httpd.conf configuration - for internal/external access

    - by tom smith
    hey. after a lot of trial/error/research, i've decided to post here in the hopes that i can get clarification on what i've screwed up...

    i've got a situation where i have multiple servers behind a router/firewall. i want to be able to access the sites i have from an internal and external url/address, and get the same site. i have to use port forwarding on the router, so i need to be able to use a reverse proxy to redirect the user to the appropriate server running the apache/web app...

    my setup: the external urls joomla.gotdns.com and forge.gotdns.com both point to my router's external ip address (67.168.2.2) (not really). the router forwards port 80 to my server lserver6 (192.168.1.56).

        lserver6 - 192.168.1.56 - joomla app
        lserver9 - 192.168.1.59 - forge app

    i want to have the httpd process (httpd.conf) configured on lserver6 so that external users accessing the system (foo.gotdns.com) can reach the joomla app on lserver6, and the same for the forge app running on lserver9. at the same time, i would also like to be able to access the apps from the internal servers, so i'd need to somehow configure the vhost/reverse proxy setup to handle the internal access as well...

    i've tried setting up multiple vhosts with no luck.. i've looked at the different examples online.. so there must be something subtle that i'm missing... the section of my httpd.conf file that deals with the vhosts is below... if there's something else that's needed, let me know and i can post it as well.. thanks -tom

        ##joomla - file /etc/httpd/conf.d/joomla.conf
        Alias /joomla /var/www/html/joomla
        <Directory /var/www/html/joomla>
        </Directory>

        # Use name-based virtual hosting.
        #NameVirtualHost *:80
        # NOTE: NameVirtualHost cannot be used without a port specifier
        # (e.g. :80) if mod_ssl is being used, due to the nature of the
        # SSL protocol.
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for requests without a known
        # server name.
        #<VirtualHost *:80>
        #    ServerAdmin [email protected]
        #    DocumentRoot /www/docs/dummy-host.example.com
        #    ServerName dummy-host.example.com
        #    ErrorLog logs/dummy-host.example.com-error_log
        #    CustomLog logs/dummy-host.example.com-access_log common
        #</VirtualHost>

        NameVirtualHost 192.168.1.56:80

        <VirtualHost 192.168.1.56:80>
            #ServerAdmin [email protected]
            #DocumentRoot /var/www/html
            #ServerName lserver6.tmesa.com
            #ServerName fforge.tmesa.com
            ServerName fforge.gotdns.com:80
            #ErrorLog logs/dummy-host.example.com-error_log
            #CustomLog logs/dummy-host.example.com-access_log common
            #ProxyRequests Off
            ProxyPass / http://192.168.1.81:80/
            ProxyPassReverse / http://192.168.1.81:80/
        </VirtualHost>

        <VirtualHost 192.168.1.56:80>
            #ServerAdmin [email protected]
            DocumentRoot /var/www/html/joomla
            #ServerName lserver6.tmesa.com
            #ServerName fforge.tmesa.com
            ServerName 192.168.1.56:80
            #ErrorLog logs/dummy-host.example.com-error_log
            #CustomLog logs/dummy-host.example.com-access_log common
            #ProxyRequests Off
        </VirtualHost>
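    A minimal sketch of one way this is commonly wired up (not taken from the post): since every request, internal or external, lands on lserver6:80, name-based virtual hosting keyed on the Host header can choose between the locally served joomla site and a reverse proxy to the forge box. The internal hostnames below are invented for illustration; the forge backend address is taken from the prose (192.168.1.59), while the posted conf proxies to 192.168.1.81, so use whichever is correct. mod_proxy and mod_proxy_http must be loaded.

        NameVirtualHost *:80

        <VirtualHost *:80>
            # forge lives on lserver9, so proxy both the external and internal names to it
            ServerName forge.gotdns.com
            ServerAlias forge.internal.lan
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.59/
            ProxyPassReverse / http://192.168.1.59/
        </VirtualHost>

        <VirtualHost *:80>
            # joomla is served locally on lserver6
            ServerName joomla.gotdns.com
            ServerAlias joomla.internal.lan lserver6
            DocumentRoot /var/www/html/joomla
        </VirtualHost>

    Internal clients would then need DNS or hosts-file entries pointing the internal names at 192.168.1.56 so the same vhosts are selected from inside the network.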

    Read the article

  • Linq find differences in two lists

    - by Salo
    I have two lists of members like this:

        Before: Peter, Ken, Julia, Tom
        After:  Peter, Robert, Julia, Tom

    As you can see, Ken is out and Robert is in. What I want is to detect the changes. I want a list of what has changed in both lists. How can LINQ help me?
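    A minimal sketch of one way to get both differences with LINQ's set operators (variable names are illustrative, not from the post):

        using System.Collections.Generic;
        using System.Linq;

        var before = new List<string> { "Peter", "Ken", "Julia", "Tom" };
        var after  = new List<string> { "Peter", "Robert", "Julia", "Tom" };

        var removed = before.Except(after).ToList(); // only in the old list -> [ "Ken" ]
        var added   = after.Except(before).ToList(); // only in the new list -> [ "Robert" ]

    Except uses the default equality comparer, so for a custom member type either override Equals/GetHashCode or pass an IEqualityComparer<T>.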

    Read the article

  • how to bind objects to gridview

    - by prince23
    hi, i am calling a webservice which returns an object containing data like:

        empid  name
        1      kiran
        2      mannu
        3      tom

        WebApplication1.DBLayer.Employee objEMP = ab.GetJobInfo();

    now my objEMP has this collection of data. how can i convert this object into a DataTable or List and bind it to a GridView? thanks in advance
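    A rough sketch of one way to do the binding (the EmployeeRow class and the GridView1 control ID below are made up for illustration; the real proxy type returned by the web service can be used instead): a generic List<T> can be assigned straight to GridView.DataSource, so converting to a DataTable is usually unnecessary.

        // Hypothetical DTO mirroring the service result (empid, name).
        public class EmployeeRow
        {
            public int EmpId { get; set; }
            public string Name { get; set; }
        }

        // In the page code-behind, after calling the service:
        List<EmployeeRow> rows = new List<EmployeeRow>
        {
            new EmployeeRow { EmpId = 1, Name = "kiran" },
            new EmployeeRow { EmpId = 2, Name = "mannu" },
            new EmployeeRow { EmpId = 3, Name = "tom" },
        };

        GridView1.DataSource = rows; // a List<T> binds directly
        GridView1.DataBind();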

    Read the article

  • Java delimiter reader

    - by newbieprogrammer
    I have a colon-delimited text file containing grouped, related data. The People lines contain people's names followed by their ages, separated by colons. How can I parse the text and group people according to their ages? The structure is as follows:

        Group.txt

        Age:10:20:30:40:
        Group:G1:10:G2:30:G3:20:G4:40:
        People:Jack:10:Tom:30:Dick:20:Harry:10:Paul:10:Peter:20:
        People:Mary:20:Lance:10:

    And I want to display something like this:

        G1
        Jack
        Harry
        Paul
        Lance

        G2
        Dick
        Peter
        Mary

        G3
        Tom

        G4
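    A rough Java sketch of one way to parse and group this (file name and line layout taken from the example above; error handling is omitted, and the grouping follows the age-to-group mapping given on the Group line):

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.*;

        public class GroupParser {
            public static void main(String[] args) throws IOException {
                Map<String, String> ageToGroup = new HashMap<>();            // "10" -> "G1", ...
                Map<String, List<String>> groupToPeople = new LinkedHashMap<>();

                for (String line : Files.readAllLines(Paths.get("Group.txt"))) {
                    String[] parts = line.split(":");
                    if (parts[0].equals("Group")) {
                        // pairs of (group, age) define the lookup and the output order
                        for (int i = 1; i + 1 < parts.length; i += 2) {
                            ageToGroup.put(parts[i + 1], parts[i]);
                            groupToPeople.put(parts[i], new ArrayList<>());
                        }
                    } else if (parts[0].equals("People")) {
                        // pairs of (name, age): bucket each person under the group for that age
                        for (int i = 1; i + 1 < parts.length; i += 2) {
                            groupToPeople.get(ageToGroup.get(parts[i + 1])).add(parts[i]);
                        }
                    }
                }

                groupToPeople.forEach((group, people) -> {
                    System.out.println(group);
                    people.forEach(System.out::println);
                    System.out.println();
                });
            }
        }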

    Read the article

  • why this left join query failed to load all the data in left table ?

    - by lzyy
    users table

        +-----+-----------+
        | id  | username  |
        +-----+-----------+
        |  1  | tom       |
        |  2  | jelly     |
        |  3  | foo       |
        |  4  | bar       |
        +-----+-----------+

    groups table

        +----+---------+---------+
        | id | user_id | title   |
        +----+---------+---------+
        |  2 | 1       | title 1 |
        |  4 | 1       | title 2 |
        +----+---------+---------+

    the query

        SELECT users.username, users.id, count(groups.title) as group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id

    result

        +----------+----+-------------+
        | username | id | group_count |
        +----------+----+-------------+
        | tom      | 1  | 2           |
        +----------+----+-------------+

    where is the rest of the users' info? the result is the same as an inner join; shouldn't a left join return all of the left table's data? PS: I'm using mysql
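    The LEFT JOIN itself is not dropping rows here: COUNT() with no GROUP BY turns the whole joined result into a single group, so only one summary row comes back. A sketch of the query with grouping added (the expected output is inferred from the sample data, not taken from the post):

        SELECT users.username, users.id, COUNT(groups.title) AS group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id
        GROUP BY users.id, users.username;

        -- expected: tom=2, jelly=0, foo=0, bar=0 (one row per user)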

    Read the article

  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some seemingly required tidbits. The contributions which I made were:

    - To support StructureMap
    - To include REST (JSON and XML) support for the service contract

    The examples which I have made, I want to format them so they fit in with the current format of examples over on Agatha and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. Whilst building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense. I have chosen a real basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code.

    Setting the scene

    I have followed the Pipes and Filters pattern with the IQueryable interface on my Repository and exposed the following methods to query Products:

        IQueryable<Product> GetProducts();
        IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName)
        Product ByProductCode(this IQueryable<Product> products, String productCode)

    I have an interface for the IProductRepository, but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data. Another good reason for following an interface-based approach is that it will demonstrate usage of my first contribution, which is the StructureMap support. Finally, the two Domain Objects I have made are Product and Category, as shown below:

        public class Product
        {
            public String ProductCode { get; set; }
            public String Name { get; set; }
            public Decimal Price { get; set; }
            public Decimal Rrp { get; set; }
            public Category Category { get; set; }
        }

        public class Category
        {
            public String Name { get; set; }
        }

    Requirements for the REST Support

    One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF Service Model attributes like DataContract, DataMember etc. Unfortunately, from what I have seen, these are required if you want the same types to work with your REST endpoint. I have not tried, but I assume the same result can be achieved by simply decorating the same classes with the Serializable attribute. Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following attribute parameters:

    - Name
    - Namespace

    e.g.

        [DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")]
        public class GetProductsRequest : Request
        {
        }

    Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the JSON. One of the things which you already know and are then reminded of is that each of your Requests and Responses ultimately inherits from an abstract base class respectively. This information needs to be represented in a way native to the format being used. I have seen this in XML, but I have not seen the format which is required for the JSON.

    JSON Consumer Example

    I have used jQuery to create the example and I simply want to make two requests to the server, which as you will know with Agatha are transmitted inside an array to reduce the service calls.
    I have also used a tool called json2, which is again over at Google Code, simply to convert my JSON expression into its string format for transmission. You will notice that I specify the type of Request I am using and the relevant Namespace it belongs to. Also notice that the second request has a parameter, so each of these two objects represents an abstract Request and the parameters of the object describe it.

        <script type="text/javascript">
            var bodyContent = $.ajax({
                url: "http://localhost:50348/service.svc/json/processjsonrequests",
                global: false,
                contentType: "application/json; charset=utf-8",
                type: "POST",
                processData: true,
                data: JSON.stringify([
                    { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" },
                    { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests",
                      CategoryName: "Category1" }
                ]),
                dataType: "json",
                success: function(msg) {
                    alert(msg);
                }
            }).responseText;
        </script>

    XML Consumer Example

    For the XML consumer example I have chosen to use a simple Console Application and make a WebRequest to the service using the XML as a request. I have made a crude static method which simply reads from an XML file, replaces some value with a parameter and returns the formatted XML. I say crude, but it simply shows how XML templates for each type of Request could be made, with a wrapper utility in whatever language you use to combine the requests which are required. The following XML is the same Request array as shown above, but simply in the XML format.

        <?xml version="1.0" encoding="utf-8" ?>
        <ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common"
                        xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/>
          <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests">
            <a:CategoryName>{CategoryName}</a:CategoryName>
          </Request>
        </ArrayOfRequest>

    It is funny because I remember submitting a question to StackOverflow asking whether there was a REST client generation tool similar to what Microsoft used for their RestStarterKit, but which could be applied to existing services which have REST endpooints attached. I could not find any, but this is now definitely something which I am going to build, as I think it is extremely useful to have, and it should not be too difficult based on the information I now know about the above. Finally, I thought that the Strategy pattern would lend itself really well to this type of thing so it can accommodate different languages. I think that is about it; I have included the code for the example Console app which I made below in case anyone wants to have a mooch at the code.
    As I said above, I want to reformat these to fit in with the current examples over on the Agatha project, but also, now thinking about it, make a Documentation web method... {brain ticking} :-) Cheers for now and here is the final bit of code:

        static void Main(string[] args)
        {
            var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests");
            request.Method = "POST";
            request.ContentType = "text/xml";

            using (var writer = new StreamWriter(request.GetRequestStream()))
            {
                writer.WriteLine(GetExampleRequestsString("Category1"));
            }

            var response = request.GetResponse();
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
            Console.ReadLine();
        }

        static string GetExampleRequestsString(string categoryName)
        {
            var data = File.ReadAllText(Path.Combine(
                Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
                "ExampleRequests.xml"));
            data = data.Replace("{CategoryName}", categoryName);
            return data;
        }

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care about how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual filesystem?

    To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with accompanying database. Both are written in .NET, using ASP.NET, and use a SQL Server database each. The product website offers files for customers to download, very simple. You have a couple of options to host these websites:

    - Buy a server, place it in a rack at an ISP and run the sites on that server
    - Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as well as the files stored, and the databases are hosted in the same server as the other shared databases
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server
    - At some cloud vendor, either host the sites 'shared' or in a VM. See above.

    With all of those options, scalability is a problem, even the cloud-based ones, though not due to the same reasons:

    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead
    - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs, however that too introduces the same overhead problems as with the physical servers: suddenly more than 1 instance runs your sites

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there: when you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, they can't do that anymore like they did on a single machine: state is something of the past, you have to store every byte of state in either a DB or in a viewstate or in a cookie somewhere so that with the next request, all state information is available through the request, as nothing is kept in-memory.

    Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM. This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be able to be used 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs

    Instead, cloud vendors should offer simply one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned), I don't care, as that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, like I bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done.

    "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they would need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about, is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR?

    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, however this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains? Or better: .NET environments which look like one machine but could be physically multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain is simply increasing, but not fragmented: all resources are available to the application. This means that the problem of how to scale is back to where it should be: with the cloud vendor.

    "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that.
I.o.w.: we can host the databases in an environment which offers itself as a single resource and is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it was a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring needed of existing code. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move into that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice at the cost of me needing to refactor my website code so it can run in the straight jacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truck load of VMs to run it: almost all websites created by developers can run on just a few VMs at most. Yet, the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers to solve that. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • Managing Dependency Hell with WiX and C#

    - by Tom the Junglist
    We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, which is a WiX3 project with separate outputs for x86 and x64 builds. These crashes have been an ongoing problem that I always thought was fixed, only to find out they were still lurking. The product itself is a collection of binaries that communicate with each other via .NET remoting, including a Windows Service and a small COM component that is loaded as an addon in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, while the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .NET remoting interfaces. I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between MS' anal strong-name implementation (specifically, the exact version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted), and having to work around Wow64 virtualization (x86 MSIs can only write to registry/HD locations via Wow64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system. What I am looking for are tips + tricks, techniques, or suggestions on how to properly do things so that I am not fighting with Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (you can't wrap a try{} block around function declarations) and GPF'ing the whole app. I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure that if a file needs to go to the GAC, it actually goes to the GAC and stays there until it is appropriate to remove it. Thanks! Tom

    Read the article

  • MSBuild: building website using AspNetCompiler - adding references?

    - by Tom Morgan
    Hi, I'm attempting to build an ASP.NET website using MSBuild - specifically the AspNetCompiler tag. I know that, for my project, I need to add some references. Within Visual Studio I have several references: one is a project reference and the others are some DLLs (AjaxControlToolkit etc). I'm happy not referencing the project and referencing the DLL instead - however I just can't work out how to add a reference. I've looked up and down and this is what I've found so far:

        <Target Name="PrecompileWeb">
          <AspNetCompiler
              VirtualPath="DeployTemp"
              PhysicalPath="D:\AutoBuild\CruiseControl\Projects\Websites\MyCompany\2.0.0\WorkingDirectory\VSS"
              TargetPath="D:\AutoBuild\CruiseControl\Projects\Websites\MyCompany\2.0.0\PreCompiled"
              Force="true"
              Debug="true"
              Updateable="true"/>
        </Target>

    Also - I've picked up this bit of code from around the web somewhere, which I thought might help:

        <ItemGroup>
          <Reference Include="My.Web.DataEngine, Culture=neutral, processorArchitecture=MSIL">
            <SpecificVersion>False</SpecificVersion>
            <HintPath>D:\AutoBuild\CruiseControl\Projects\Components\My.Web.DataEngine\bin\Debug\My.Web.DataEngine.dll</HintPath>
          </Reference>
        </ItemGroup>

    What I want to do is add an attribute to the AspNetCompiler tag, something like References="@(Reference)", but MSBuild isn't very happy about this. I've been a bit stuck in not being able to find decent references on doing this anywhere, so I'd really appreciate some pointers or reference material etc. (or just the answer!) Thanks for your help. -tom
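    One workaround that is often suggested for this (a sketch only, not verified against this project): AspNetCompiler picks up assemblies from the website's Bin folder, so the DLLs can be copied there in a target that runs before the precompile. The item name and paths below are illustrative.

        <ItemGroup>
          <PrecompileRefs Include="D:\AutoBuild\CruiseControl\Projects\Components\My.Web.DataEngine\bin\Debug\My.Web.DataEngine.dll" />
          <!-- add AjaxControlToolkit.dll and the other referenced DLLs here -->
        </ItemGroup>

        <Target Name="CopyRefsToBin">
          <Copy SourceFiles="@(PrecompileRefs)"
                DestinationFolder="D:\AutoBuild\CruiseControl\Projects\Websites\MyCompany\2.0.0\WorkingDirectory\VSS\Bin" />
        </Target>

        <Target Name="PrecompileWeb" DependsOnTargets="CopyRefsToBin">
          <!-- existing AspNetCompiler invocation from above goes here -->
        </Target>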

    Read the article

  • WiX: Prevent 32-bit installer from running on 64-bit Windows

    - by Tom the Junglist
    Hi everyone, Due to user confusion, our app requires separate installers for 32-bit and 64-bit versions of Windows. While the 32-bit installer runs fine on win64, it has the potential to create support headaches and we would like to prevent this from happening. I want to prevent the 32-bit MSI installer from running on 64-bit Windows machines. To that end I have the following condition:

        <Condition Message="You are attempting to run the 32-bit installer on a 64-bit version of Windows.">
          <![CDATA[Msix64 AND (NOT Win64)]]>
        </Condition>

    With Win64 defined like this:

        <?if $(var.Platform) = "x64"?>
          <?define PlatformString = "64-bit"?>
          <?define Win64 ?>
        <?else?>
          <?define PlatformString = "32-bit"?>
        <?endif?>

    Thing is, I can't get this check to work right. Either it fires all the time, or none of the time. The goal is to check the presence of the run-time Msix64 variable against the compile-time Win64 variable and throw an error if these don't line up, but the logic is not working how I intend it to. Has anyone come up with a better solution? Thanks! Tom
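    A possible alternative (a sketch based on an assumption about the intent, not taken from the post): because Win64 here is a compile-time preprocessor symbol rather than an MSI property, the launch condition can simply be emitted only into the 32-bit package, and the runtime test then only needs Msix64. A WiX <Condition> blocks the install when its expression evaluates to false, so:

        <?if $(var.Platform) != "x64" ?>
          <!-- only the x86 package carries this launch condition -->
          <Condition Message="You are attempting to run the 32-bit installer on a 64-bit version of Windows.">
            <![CDATA[NOT Msix64]]>
          </Condition>
        <?endif?>

    On 64-bit Windows, Msix64 is set, the expression is false and the message is shown; on 32-bit Windows the install proceeds.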

    Read the article

  • iPhone Safari does not auto scale back down on portrait->landscape->portrait

    - by Tom
    Hi, I have a very simple HTML page with this META tag for the iPhone:

        <meta name="viewport" content="height=device-height,width=device-width,initial-scale=1.0,user-scalable=no" />

    When the page loads in portrait mode it looks fine and the width fits the screen. When I rotate the iPhone to landscape mode the web page is auto resized to fit the landscape width. Good, this is what I want. But when I rotate back to portrait, the page is not resized back to fit the portrait width like it was before. It remains at the landscape width. I want the iPhone to set it back to the right width automatically, just like it did for landscape mode. I don't think this should involve orientation listeners because it is all done automatically and I don't have any special styling for the different modes. Why doesn't the iPhone resize the web page back in portrait mode? How do I fix this? Thanks! Tom.

    Read the article

  • Fade out when user leaves page — jquery

    - by Tom Julian Hume
    I have some simple page transitions that fade in once the user has landed. However, I'm also trying to make the same page fade out when the user leaves. I have found a few solutions, but they appeared to use delay(). Are there any that don't? Thanks for any help (I'm new to this, mind!) Tom :) I am currently using this code:

        $(document).ready(function() {
            var linkLocation;

            $('body').fadeIn(2000);

            $('#stop').click(function(e) {
                e.preventDefault();
            });

            $('#clients').click(function() {
                $("#projectinfo").slideUp('slow');
                $("#us").fadeOut('slow');
                $("ul").fadeToggle('slow');
            });

            $('#information').click(function() {
                $("#projectinfo").slideUp('slow');
                $("ul").fadeOut('slow');
                $("#us").fadeToggle('slow');
            });

            $('#question').click(function() {
                $("#projectinfo").slideToggle('slow');
            });

            $('#question').hover(function() {
                $("#projectinfo").slideToggle('slow');
            });

            // fade the page out, then follow the clicked link once the fade completes
            $("a").click(function(event) {
                event.preventDefault();
                linkLocation = this.href;
                $("body").fadeOut(1000, redirectPage);
            });

            function redirectPage() {
                window.location = linkLocation; // navigate to the link captured above
            }
        });

    Read the article

  • Evaluation of environment variables in command run by Java's Runtime.exec()

    - by Tom Duckering
    Hi, I have a scenario where I have a Java "agent" that runs on a couple of platforms (specifically Windows, Solaris & AIX). I'd like to factor out the differences in filesystem structure by using environment variables in the command line I execute. As far as I can tell there is no way to get the Runtime.exec() method to resolve/evaluate any environment variables referenced in the command String (or array of Strings). I know that if push comes to shove I can write some code to pre-process the command String(s) and resolve environment variables by hand (using getEnv() etc). However I'm wondering if there is a smarter way to do this, since I'm sure I'm not the only person wanting to do this and I'm sure there are pitfalls in "knocking up" my own implementation. Your guidance and suggestions are most welcome.

    edit: I would like to refer to environment variables in the command string using some consistent notation such as $VAR and/or %VAR%. Not fussed which.

    edit: To be clear, I'd like to be able to execute a command such as:

        perl $SCRIPT_ROOT/somePerlScript.pl args

    on Windows and Unix hosts using Runtime.exec(). I specify the command in a config file that describes a list of jobs to run, and it has to work cross-platform, hence my thought that an environment variable would be useful to factor out the filesystem differences (/home/username/scripts vs C:\foo\scripts). Hope that helps clarify it. Thanks. Tom
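    A small Java sketch of one pragmatic option (an illustration, not the poster's code): hand the command to a shell and let the shell do the expansion, keeping $VAR syntax on Unix and %VAR% syntax on Windows. The platform test and the command strings are assumptions.

        import java.io.IOException;

        public class ShellExec {
            public static void main(String[] args) throws IOException {
                boolean windows = System.getProperty("os.name").toLowerCase().contains("win");

                // Use the platform's own variable syntax and let its shell expand it.
                String command = windows
                        ? "perl %SCRIPT_ROOT%\\somePerlScript.pl args"
                        : "perl $SCRIPT_ROOT/somePerlScript.pl args";

                String[] wrapped = windows
                        ? new String[] { "cmd.exe", "/c", command }
                        : new String[] { "/bin/sh", "-c", command };

                Runtime.getRuntime().exec(wrapped);
            }
        }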

    Read the article

  • how to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test php app. Each script runs forever. So cat.sh, dog.sh, and foo.sh each run a php script, and each shell script runs the php app in a loop, so it runs forever, sleeping after each run. I'm trying to figure out how to run the scripts in parallel, and at the same time see the output of the php apps in the stdout/term window. I thought simply doing something like

        foo.sh > &2
        dog.sh > &2
        cat.sh > &2

    in a shell script would be sufficient, but it's not working.

        foo.sh runs foo.php once, and it runs correctly
        dog.sh runs dog.php in a never ending loop. it runs as expected
        cat.sh runs cat.php in a never ending loop *** this never runs!!!

    it appears that the shell script never gets to run cat.sh. if i run cat.sh by itself in a separate window/term, it runs as expected... thoughts/comments
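    A sketch of the usual way to run them in parallel from a wrapper script (script names taken from the post): start each one in the background with & so the wrapper doesn't block on the first infinite loop, then wait so the wrapper itself keeps running; all three keep writing to the current terminal.

        #!/bin/sh
        # run the three test scripts concurrently; output still goes to this terminal
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &

        # block until the background jobs exit (effectively forever for the looping scripts)
        wait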

    Read the article

  • Flex service in debug

    - by Tom
    Hello everybody, I am trying to learn the new services method in Flex 4, but I can't get it to work. A test operation next to the service in Flash Builder 4 works, but when I run the code I get NetConnection.Call.Failed: HTTP: Failed. Does somebody know what the problem can be? Tom

    CODE: PHP

        <?php
        class AuthService {
            public function login($username, $password) {
                return 'ok';
            }

            public function logout() {
                return true;
            }
        }
        ?>

    FLEX

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/mx"
                       minWidth="955" minHeight="600"
                       xmlns:authservice="services.authservice.*">
            <fx:Script>
                <![CDATA[
                    import mx.controls.Alert;

                    protected function button_clickHandler(event:MouseEvent):void {
                        loginResult.token = authService.login(username, password);
                    }
                ]]>
            </fx:Script>
            <fx:Declarations>
                <s:CallResponder id="loginResult"/>
                <authservice:AuthService id="authService"
                    fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)"
                    showBusyCursor="true"/>
                <!-- Place non-visual elements (e.g., services, value objects) here -->
            </fx:Declarations>
            <s:Button x="97" y="193" label="Button" id="button" click="button_clickHandler(event)"/>
            <s:TextInput x="91" y="87" id="username"/>
            <s:TextInput x="97" y="117" id="password"/>
        </s:Application>

    Read the article

  • Java NIO (Netty): How does Encryption or GZIPping work in theory (with filters)

    - by Tom
    Hello Experts, I would be very thankful if you can explain to me how, in theory, the "Interceptor/Filter" pattern works on byte streams (over sockets/channels) in asynchronous IO with Netty, in regard to encryption or compression of data. Given I have a filter that does GZIPping: how is this internally implemented? Does the filter "collect" bytes from the channel until it has a useful number of bytes that can then be en/decoded? What is in general the minimal "blocksize" (data to encode/decode in a chunk) for socket-based gzipping? Does this "blocksize" have to be negotiated in advance between server and client? What happens if the client does not send enough data to "fill" the blocksize (due to network congestion) but does not close the connection? Does this mean the other side will simply wait until it gets enough bytes to decode, or until a timeout occurs? And how is the filter pattern then applied: will the compression filter de/compress the blocksize of bytes and then store them again in the same buffer, and would I (in the case of Netty) normally be using the ChannelHandlerContext to pass the de/encoded data to the next filter? Any explanations/links/tutorials (for beginners ;-) will be very much appreciated to help me understand how, for example, encryption/compression is implemented in socket-based communication with the filter/interceptor pattern. Thank you very much, tom

    Read the article

  • Majordomo/Mailman - Yahoo/Google Groups

    - by tom smith
    Hi. Not an app question, but thought that I might ask here anyway! The responses might help someone. I've got an app where we're going to be dealing with 5000-10000 people in a group/pool. Periodically, different subsets of people will break off into their own group. I'm looking for thoughts on how to manage/approach this situation. (For now, all of this is being managed on a few servers in the garage, with a dynamic IP.) I've looked into Yahoo/Google groups, and they seem to be reasonable. The primary issue that I see with this approach is that I don't have a good way of quickly/easily allowing a subset of the group to form their own group for a given project. This kind of function is critical. The upside to this, though, is that I wouldn't have to really set anything up, and the hosted groups could send emails to the users all day long without running into cap/bandwidth limits. The other approach is a managed list, like Mailman/Majordomo. This approach appears to be more flexible, and looks like it could be modified to handle the quick creation of lists, allowing users to quickly be assigned to different lists on the fly.. The downside: I'd have to run my own Mailman/Majordomo instance, as well as the associated mail server. Or I could look at possibly using one of the hosted services.. But this is a project on the cheap, so we're really trying to keep costs down. Thoughts/Comments/Pointers would be greatly appreciated. Thanks in advance -tom

    Read the article

  • Workflow engine BPMN, Drools, etc or ESB?

    - by Tom
    We currently have an application that is based on an in-house developed workflow engine with a YAML based DSL. We are looking to move parts of it to Java. I have discovered a number of Java solutions like Intalio, jBPM, Drools Expert, Drools Flow etc. They appear to be aimed at businesses where the business analyst creates the workflows using a graphical editor and submits them to the workflow engine. They seem geared towards ease of use for non-technical people rather than for developers, with a focus on human interaction. The workflows tend to look like:

        Discover-a-file        -\
                                  -> join -> process-file -> move-file -> register-file
        Discover-some-metadata -/

    If any step fails we need to retry it X times. We also need to be able to stop the system and be able to restart it and have it continue from where it was (durable). Some of our workflows can be defined by a set of goals we need to achieve, so Jess's backwards rule chaining sounds interesting, but it is not open source. It might be that what we are after is a Finite State Machine engine or just an Enterprise Service Bus, doing everything as JMS queues. Is there a good open source workflow engine that is both standards-based and geared towards developers? We don't particularly want to use a graphical workflow designer or write reams of XML, and it should ideally be in Java or language agnostic (making REST/SOAP calls to external services). Thanks, Tom

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone, I work for a small dotcom which will soon be launching a reasonably-complicated Windows program. We have uncovered a number of "WTF?" type scenarios that have turned up as the program has been passed around to the various not-technical-types that we've been unable to replicate. One of the biggest problems we're facing is that of testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc). We are also geographically isolated. The "testing lab" consists of a handful of VMWare and VPC images running sort-of fresh installs of Windows XP and Vista, which runs on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to most effectively report problems, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote control sessions to dig into the guts of their computers is time-consuming. I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look in to them, however they are still largely very much self-service and I was wondering if there were any better ways. How have you, or any other companies in a similar situation handled this sort of thing? EDIT: According to the lingo, our goal here is to expand our systems testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much as we are consistently hitting walls at the systems testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2? Tom

    Read the article

  • Solr associations

    - by Tom
    Hi all, The last couple of days we have been thinking of using Solr as our search engine of choice. Most of the features we need are out of the box or can be easily configured. There is however one feature that we absolutely need that seems to be well hidden (or missing) in Solr. I'll try to explain with an example. We have lots of documents that are actually businesses:

        <document>
          <name>Apache</name>
          <cat>1</cat>
          ...
        </document>
        <document>
          <name>McDonalds</name>
          <cat>2</cat>
          ...
        </document>

    In addition we have another xml file with all the categories and synonyms:

        <cat id=1>
          <name>software</name>
          <synonym>IT</synonym>
        </cat>
        <cat id=2>
          <name>fast food</name>
          <synonym>restaurant</synonym>
        </cat>

    We want to associate businesses and categories so we can search using the name and/or synonyms of the category. But we do not want to merge these files at indexing time, because we should be able to update the categories (adding/removing synonyms...) without indexing all the businesses again. Is there anything in Solr that does this kind of association, or do we need to develop some specific pieces? All feedback and suggestions are welcome. Thanks in advance, Tom

    Read the article

  • Entity framework memory leak after detaching newly created object

    - by Tom Peplow
    Hi, Here's a test:

        WeakReference ref1;
        WeakReference ref2;
        TestRepositoryEntitiesContainer context;
        int i = 0;

        using (context = GetContext<TestRepositoryEntitiesContainer>())
        {
            context.ObjectMaterialized += (o, s) => i++;

            var item = context.SomeEntities
                .Where(e => e.SomePropertyToLookupOn == "some property")
                .First();
            context.Detach(item);
            ref1 = new WeakReference(item);

            var newItem = new SomeEntity { SomePropertyToLookupOn = "another value" };
            context.SomeEntities.AddObject(newItem);
            ref2 = new WeakReference(newItem);
            context.SaveChanges();
            context.SomeEntities.Detach(newItem);

            newItem = null;
            item = null;
        }
        context = null;
        GC.Collect();

        Assert.IsFalse(ref1.IsAlive);
        Assert.IsFalse(ref2.IsAlive);

    First assert passes, second fails... I hope I'm missing something, it is late... But it appears that detaching a fetched item will actually release all handles on the object, letting it be collected. However, for new objects something keeps a pointer and creates a memory leak. NB - this is EF 4.0. Anyone seen this before and worked around it? Thanks for your help! Tom

    Read the article

  • Regex: How do I match some regex logic 1 or more times?

    - by tom
    I already have some regex logic which looks for a div tag with class=something. However, this might occur more than once (one after another). You can't simply add square brackets around that complex regex logic (e.g. [:some complicated regex logic already existing:]*), so how do you do it in regex? I want to avoid having to use the programming language's logic to append that regex logic after itself if I can... Thanks
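    For illustration (a generic sketch, not the poster's actual pattern): wrap the existing expression in a non-capturing group and put the quantifier on the group, e.g.

        (?:existing_div_pattern)+
        (?:existing_div_pattern){1,3}

    The (?: ... ) groups without capturing, so + or {m,n} repeats the whole wrapped expression rather than just its last token; existing_div_pattern stands in for the current div/class regex.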

    Read the article

  • web design question (php/ajax)

    - by tom smith
    Hi guys... Hope this isn't a waste of your time. I'm working on a project, and it occurred to me that there's a chunk of code out there that should show me how others have implemented this. I've got a project where I'm going to have a page with a select box. The user will select an item from the select list, and based on the item selected, a separate section of the page (areaB) will change in terms of the content/tables being displayed. I then want to allow the user to go through a series of subpages in areaB, where the user goes through a submit/cancel/confirm process; the stuff in areaB changes, with the rest of the page remaining the same... I'm trying to figure out the best approach to implement this on both the client and server side. I could just have an ugly "if block" where I have a bunch of logic and completely regenerate the page each time the user selects an action.. I could have an approach that might involve divs/frames, where I then just regenerate the targeted frame/div area.. is this even possible?? Or I could have some form of ajaxy process, which would only alter the targeted section(s) of the page... So.. I'm trying to talk to anyone who has ideas on how to do this, or more ideally, who knows of a good code (client/server side) example of this that I can examine. I'd really appreciate it!! I've got a more detailed overview but didn't know if it would be cool to post it here... thanks.. tom

    Read the article

  • A way to specify a different host in an SSH tunnel from the host in use

    - by Tom
    I am trying to setup an SSH tunnel to access Beanstalk (to bypass an annoying proxy server). I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) in my hosts file to 127.0.0.1 (and use the ip in place of the domain when setting up the tunnel). The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard) and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk use the hostname specified in a request to determine which account the username / password combination should be checked against. If localhost is used, I think this information is missing (in some manner which Beanstalk requires) from the requests. At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, then for the tunnel I use the command: ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1 I can tell Subversion that the repo. is located at: https://username.svn.beanstalkapp.com:8080/repo-name This uses my tunnel and the username and password are accepted. So, my question is if there is an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts file workaround?

    Read the article

  • Can I reset a forgotten owner password with iText?

    - by Tom Hubbard
    With iText I can use Java to open a pdf and write it. If the pdf has an owner password I can still open it but it can not be written. Clearly the content is readable, it seems like at that point you could simply write the document to a new file. iText doesn't allow this, it throws a bad password exception. Is there a way around this?

    Read the article
