Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 455/500

  • Removing the port number from URL

    - by DrewSSP
    I'm new to anything related to servers and am trying to deploy a Django application. Today I bought a domain name for the app and am having trouble configuring it so that the base URL does not need the port number at the end. I have to type www.trackthecharts.com:8001 to see the website when I only want to use www.trackthecharts.com. I think the problem is somewhere in my nginx, gunicorn or supervisor configuration.

    gunicorn_config.py:

        command = '/opt/myenv/bin/gunicorn'
        pythonpath = '/opt/myenv/top-chart-app/'
        bind = '162.243.76.202:8001'
        workers = 3

    nginx config:

        server {
            server_name 162.243.76.202;
            access_log off;
            location /static/ {
                alias /opt/myenv/static/;
            }
            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    supervisor config:

        [program:top_chart_gunicorn]
        command=/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py djangoTopChartApp.wsgi
        autostart=true
        autorestart=true
        stderr_logfile=/var/log/supervisor_gunicorn.err.log
        stdout_logfile=/var/log/supervisor_gunicorn.out.log

    Thanks for taking a look.
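
    A likely direction for the fix (a minimal sketch, not verified against this exact setup): the nginx server block above has no listen directive and names only the raw IP, so requests to the bare domain on port 80 never match it. Binding gunicorn to localhost and letting nginx answer port 80 for the domain would look roughly like this:

        server {
            listen 80;
            server_name trackthecharts.com www.trackthecharts.com;

            location /static/ {
                alias /opt/myenv/static/;
            }

            location / {
                # assumes gunicorn_config.py is changed to bind = '127.0.0.1:8001'
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }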

    Read the article

  • IIS to SQL Server Kerberos auth issues

    - by crosan
    We have a 3rd-party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb, we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to get to the website, we use http://application.company.com, which resolves to SvrWeb, and the host headers resolve to the correct website. There is also a specific application pool set up for this site, which uses an Active Directory account identity we'll call "company\SvrWeb_iis". We're set up to allow delegation on this account and to allow it to impersonate another login, which is what we want: this account should pass along the AD credentials of the person signed into the website to SQL Server, instead of a service account. We also set up the SPN for the SvrWeb_iis account via the following command:

        setspn -A HTTP/SvrWeb.company.com SvrWeb_iis

    The website pulls up, but the section of the website that makes the call to the database returns the message:

        Cannot execute database query.
        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    I thought we had the SPN information set up correctly, but when I check the security event log on SvrWeb I see entries for my logins, and they seem to be using NTLM and not Kerberos:

        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM

    Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.
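
    One thing worth checking (a hedged guess, using the placeholder names from the question): Kerberos is only negotiated when an SPN exists for the host name the browser actually uses - here application.company.com, not SvrWeb - and delegation to SQL Server additionally needs MSSQLSvc SPNs registered to the SQL Server service account. Roughly:

        rem SPN for the URL users actually hit, on the app-pool account
        setspn -A HTTP/application.company.com company\SvrWeb_iis

        rem SPNs for the SQL service account ("SqlSvcAccount" is hypothetical)
        setspn -A MSSQLSvc/SvrSQL.company.com:1433 company\SqlSvcAccount
        setspn -A MSSQLSvc/SvrSQL:1433 company\SqlSvcAccount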

    Read the article

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL DB and four slave DBs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:

        // for the master server (i.e. UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if ( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        // this connection can be used for READ-ONLY (i.e. SELECT) statements on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        // fall back to the master if the slave DB is down
        if ( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave server database goes down. If the local DB is down, the mysql_connect() call takes at least 30 seconds before it gives up trying to connect to the localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect() call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
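
    For the timeout half of the question, ext/mysql takes its connect timeout from the mysql.connect_timeout ini setting, which can be lowered just for the risky connection (a minimal sketch; the 2-second budget is an arbitrary choice):

        // fail fast on the local slave instead of hanging for ~30s
        ini_set('mysql.connect_timeout', 2);
        $Link_Local = @mysql_connect($Host_Local, $User_Local, $Password_Local);

        if (!$Link_Local) {
            // slave unreachable: fall back to the master straight away
            ini_set('mysql.connect_timeout', 10);
            $Link_Local = mysql_connect($Host, $User, $Password);
        }

    A health-check script (or the load balancer itself) marking dead slaves, so web requests never pay the connection attempt at all, is the more robust long-term fix.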

    Read the article

  • Help me plan a larger Qt project

    - by Pirate for Profit
    I'm trying to create an automated task management system for our company, because they pay me to waste my time. New users will create a "profile", which will store all the working data (I guess serialize everything into XML, right?). A "profile" will contain many different tasks. Tasks are basically just standard computer-janitor crap such as moving files around, reading from/writing to databases, pinging servers, etc. So as you can see, a task has many different jobs it does, and tasks should run indefinitely as long as the user somehow generates "jobs" for them. There should also be a way to enable/disable (start/pause) tasks. They say create the UI first, so... I figure the best way to represent this is with a list-view widget that lists all the tasks in the profile. Enabled tasks will be bold, disabled ones won't be. When you double-click a task, a tab in the main view opens with all its settings, output and errors. You can right-click a task in the list view to enable/disable it, etc. So each task will be a closable tab, but when you close it, it just hides. My question is: should I extend QAction and QTabWidget so I can easily drop tasks in and out of my list view and tab bar? I'm thinking of some way to make this plugin-based, since a lot of the plugins may share similar settings (like some of the same options, but with different info as input). Also, what's the best way to set up threading for this application?
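
    On the inheritance question: extending QTabWidget/QAction is rarely necessary; keeping Task as a plain object and letting the window own a QListWidget plus a QTabWidget usually stays simpler. A minimal sketch (Qt 4-era API; Task, its widget() accessor and the member names are invented here):

        void MainWindow::addTask(Task *task)
        {
            QListWidgetItem *item = new QListWidgetItem(task->name(), taskList);
            QFont font = item->font();
            font.setBold(task->isEnabled());   // bold = enabled, plain = disabled
            item->setFont(font);
            m_tasks.insert(item, task);        // QMap<QListWidgetItem*, Task*>
        }

        void MainWindow::onItemDoubleClicked(QListWidgetItem *item)
        {
            Task *task = m_tasks.value(item);
            int index = tabs->indexOf(task->widget());
            if (index == -1)
                index = tabs->addTab(task->widget(), task->name());
            tabs->setCurrentIndex(index);      // "closing" the tab elsewhere just hides it
        }

    For threading, one QThread (or a QThreadPool / QtConcurrent job) per running task, with results delivered back to the UI through queued signal/slot connections, fits the enable/disable model well.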

    Read the article

  • Update Web Reference in Visual Studio

    - by NeilD
    Hi, I have inherited a web site project that makes use of a number of WCF web services hosted on a BizTalk server. We have two environments that I need to deploy this project to, with different URLs for the different BizTalk servers. I.e., in the staging environment I need to point the services at xx.xx.xx.101; in the live environment I need to point them at xx.xx.xx.102, or whatever. Currently we've got all of the URLs stored in keys in the web.config file so that we can change them dynamically... Unfortunately this isn't working! If I change a URL in the web.config to something other than what the project was compiled with, I get an error when calling the service:

        Server did not recognize the value of HTTP Header SOAPAction: xx.xx.xx.101\ServiceName\MethodName

    I'm told that the only way they've known to deploy this is to update the web.config URLs, change all of the web references in Visual Studio to match, click "Update Web Reference" for each reference in Visual Studio, and then compile. It's driving me mad! I've written a pre-build NAnt script to go through and replace all instances of the URL found anywhere in the project directory, and even that isn't making any difference. There must be something else being pulled down from the service when I click "Update Web Reference", but I'm new to working with web services, so I'm not sure what. Does anyone have any ideas? Is there a way to do this programmatically? Thanks.
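
    If these are classic "web references" (ASMX-style generated proxies), note that the proxy class exposes a Url property and can be generated with its URL behavior set to Dynamic, which lets the endpoint come from web.config at runtime instead of being baked in at compile time. A sketch (the proxy class name and appSettings key are invented here):

        using System.Configuration;

        var proxy = new ServiceNameProxy();   // hypothetical generated proxy
        proxy.Url = ConfigurationManager.AppSettings["ServiceName.Url"];
        var result = proxy.MethodName();

    The SOAPAction error itself suggests the two environments expose differently-built services (mismatched action/namespace), in which case regenerating against one may never work against the other no matter which URL is configured.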

    Read the article

  • JSR-299 (CDI) configuration at runtime

    - by nsn
    I need to configure different @Alternatives, @Decorators and @Interceptors for different runtime environments (think testing, staging and production servers). Right now I use Maven to create three WARs, and the only difference between those WARs is in the beans.xml files. Is there a better way to do this? I do have @Alternative @Stereotypes for the different environments, but even then I need to alter beans.xml, and they don't work for @Decorators (or do they?). Is it somehow possible to instruct CDI to ignore the values in beans.xml and use a custom configuration source? Then I could, for example, read a system property or other environment variable. The application exclusively runs in containers that use Weld, so a Weld-specific solution would be OK. I already tried to Google this but can't seem to find good search terms, and I asked on the Weld users forum, but to no avail. Someone over there suggested writing my own custom extension, but I can't find any API to actually change the container configuration at runtime. I think it would be possible to have some sort of @ApplicationScoped configuration bean and inject it into all @Decorators, which could then decide themselves whether they should be active or not; and then, to configure @Alternatives, write @Produces methods for every interface with multiple implementations and inject the config bean there too. But this seems like a lot of unnecessary work to essentially duplicate functionality already present in CDI.
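
    For what it's worth, the producer-method idea in the last paragraph is fairly compact in practice. A minimal sketch (the property name, interface and implementations are invented, and the concrete classes would need qualifiers or exclusion from normal bean discovery to avoid ambiguous injection points):

        @ApplicationScoped
        public class EnvironmentProducers {

            // e.g. the container JVM is started with -Dapp.env=staging
            private final String env = System.getProperty("app.env", "production");

            @Produces @ApplicationScoped
            public PaymentService paymentService() {
                if ("testing".equals(env)) {
                    return new MockPaymentService();
                }
                return new RealPaymentService();
            }
        }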

    Read the article

  • Help Optimizing MySQL Table (~ 500,000 records) and PHP Code.

    - by Pyrite
    I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and currently the table is up to about 475,000+ records. Because of this, querying this table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optimized as possible, so that I can then focus on the application that queries the database. The table is as follows:

        CREATE TABLE IF NOT EXISTS `people` (
            `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
            `name` varchar(40) NOT NULL,
            `ip` int(4) unsigned NOT NULL,
            `guid` varchar(32) NOT NULL,
            `server` int(4) unsigned NOT NULL,
            `date` int(11) NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY `Person` (`name`,`ip`,`guid`),
            KEY `server` (`server`),
            KEY `date` (`date`),
            KEY `PlayerName` (`name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ;

    I'm storing the IPv4 addresses (ip and server) as 4-byte integers, using the MySQL functions INET_ATON() and INET_NTOA() to encode and decode; I heard that this way is faster than varchar(15). The guid is an md5sum (32 hex chars), and the date is stored as a Unix timestamp. I have a unique key on name, ip and guid so as to avoid duplicates of the same player. Do I have my keys set up right? Is the way I'm storing data efficient? Here is the code that queries this table. You search for a name, IP, or GUID, and it grabs the results of the query and cross-references other records that match the name, IP, or GUID from the results of the first query, and does it for each field. This is kind of hard to explain, but basically, if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used.

        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
        Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" />
        </form>

        <?php if (!empty($_POST['query'])) { ?>
        <table cellspacing="1" id="1up_people" class="tablesorter" width="300">
        <thead>
        <tr>
            <th>ID</th>
            <th>Player Name</th>
            <th>Player IP</th>
            <th>Player GUID</th>
            <th>Server</th>
            <th>Date</th>
        </tr>
        </thead>
        <tbody>
        <?php
        function super_unique($array)
        {
            $result = array_map("unserialize", array_unique(array_map("serialize", $array)));
            foreach ($result as $key => $value) {
                if (is_array($value)) {
                    $result[$key] = super_unique($value);
                }
            }
            return $result;
        }

        if (!empty($_POST['query'])) {
            $query = trim($_POST['query']);
            $count = 0;
            $people = array();

            $link = mysql_connect('localhost', 'mysqluser', 'yea right!');
            if (!$link) {
                die('Could not connect: ' . mysql_error());
            }
            mysql_select_db("1up");

            $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                    FROM 1up_people
                    WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")";
            $result = mysql_query($sql, $link);
            if (!$result) {
                die(mysql_error());
            }

            // Take the initial results and parse each column into its own array
            while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                $name = htmlspecialchars($row[1]);
                $people[] = array(
                    'id' => $row[0], 'name' => $name, 'ip' => $row[2],
                    'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]
                );
            }

            // Now, for each ip, guid and name in the results, find additional records
            $people2 = array();
            foreach ($people AS $person) {
                $ip = $person['ip'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                        FROM 1up_people WHERE (ip = \"$ip\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people2[] = array(
                        'id' => $row[0], 'name' => $name, 'ip' => $row[2],
                        'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]
                    );
                }
            }

            $people3 = array();
            foreach ($people AS $person) {
                $guid = $person['guid'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                        FROM 1up_people WHERE (guid = \"$guid\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people3[] = array(
                        'id' => $row[0], 'name' => $name, 'ip' => $row[2],
                        'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]
                    );
                }
            }

            $people4 = array();
            foreach ($people AS $person) {
                $name = $person['name'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                        FROM 1up_people WHERE (name = \"$name\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people4[] = array(
                        'id' => $row[0], 'name' => $name, 'ip' => $row[2],
                        'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]
                    );
                }
            }

            // Combine people2, people3 and people4 into people and de-duplicate
            $people = array_merge($people, $people2);
            $people = array_merge($people, $people3);
            $people = array_merge($people, $people4);
            $people = super_unique($people);

            foreach ($people AS $person) {
                $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10';
                echo "<tr>\n";
                echo "<td>".$person['id']."</td>";
                echo "<td>".$person['name']."</td>";
                echo "<td>".$person['ip']."</td>";
                echo "<td>".$person['guid']."</td>";
                echo "<td>".$person['server']."</td>";
                echo "<td>".$date."</td>";
                echo "</tr>\n";
                $count++;
            }

            // Find total records
            //$result = mysql_query("SELECT id FROM 1up_people", $link);
            //$total = mysql_num_rows($result);

            mysql_close($link);
        }
        ?>
        </tbody>
        </table>
        <p>
        <?php echo $count." Records Found for \"".$_POST['query']."\" out of $total"; ?>
        </p>
        <?php
        }
        $time_stop = microtime(true);
        print("Done (ran for ".round($time_stop-$time_start)." seconds).");
        ?>

    Any help at all is appreciated! Thank you.
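
    A structural observation worth sketching (untested against this exact schema): the three follow-up loops issue one query per row of the initial result, so a broad search can fire hundreds of queries, and the leading-wildcard LIKE on INET_NTOA(ip) can never use the ip index. The cross-referencing can usually be collapsed into a single self-join, comparing ip as an integer:

        -- one round trip: initial matches, plus every record sharing
        -- a name, ip or guid with any of them ('search' is the user input)
        SELECT DISTINCT p2.id, p2.name, INET_NTOA(p2.ip) AS ip, p2.guid,
               INET_NTOA(p2.server) AS server, p2.date
        FROM 1up_people AS p1
        JOIN 1up_people AS p2
          ON p2.name = p1.name OR p2.ip = p1.ip OR p2.guid = p1.guid
        WHERE p1.name LIKE '%search%'
           OR p1.guid = 'search'
           OR p1.ip = INET_ATON('search');

    The OR conditions still limit index use, but this replaces the N+1 query pattern with one statement; the user input should of course be escaped (mysql_real_escape_string) before being interpolated.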

    Read the article

  • C#/.NET: Find out whether a server exists, query DNS for SRV records

    - by TomTom
    Writing a client/server tool, I am tasked with finding a server to connect to, and I would love to make things as easy as possible for the user. As such, my idea is to:

    1. Check whether specific servers (coded by name) exist - like "mail.xxx" for a mail server, for example (my example is not a mail server).
    2. Otherwise, query for DNS SRV records, allowing the admin to configure a server location for the specific service (that the client connects to).

    The result is that the user may have to enter only a domain name, and possibly not even that (using the registered standard domain of the computer in a LAN environment). Any ideas how:

    1. To find out whether a server exists and answers (i.e. is online) in the fastest way? TCP can take a long time if the server is not there. A UDP-style ping sounds like a good idea to me, but ICMP ping itself may be unavailable.
    2. To best query, from within .NET, for an SRV record in a specific (the default) domain?
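
    For the liveness check, System.Net.NetworkInformation.Ping with a short timeout is a cheap first pass (a sketch; ICMP may be blocked, in which case TcpClient.BeginConnect with a manual timeout is the usual fallback). .NET 2.0/3.5 has no built-in SRV lookup, so that part generally means P/Invoking the Win32 DnsQuery function or using a third-party DNS library:

        using System.Net.NetworkInformation;

        static bool IsReachable(string host)
        {
            try
            {
                using (var ping = new Ping())
                {
                    // 500 ms: an arbitrary budget suited to a LAN
                    PingReply reply = ping.Send(host, 500);
                    return reply != null && reply.Status == IPStatus.Success;
                }
            }
            catch (PingException)
            {
                return false;   // name didn't resolve or ICMP failed outright
            }
        }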

    Read the article

  • Improve a regex statement in order to be as efficient as it can be

    - by user551625
    I have a PHP program that, at some point, needs to analyze a large amount of HTML+JavaScript text to parse info out of it. What I want to parse comes in two parts:

    1. Separate all the "HTML groups" to parse.
    2. Parse each HTML group to get the needed information.

    In the 1st parse it needs to find

        <div id="myHome"

    and start capturing after that tag, then stop capturing before

        <span id="nReaders"

    and capture the number that comes after this tag, then stop. The 2nd parse uses capture no. 1 from the parse made before (0 has the whole thing and 2 has the number) and then finds the target tag. I already have code that does this and works. Is there a way to improve it, to make it easier for the machine to parse?

        preg_match_all('%<div id="myHome"[^>]>(.*?)<span id="nReaders[^>]>([0-9]+)<"%msi',
            $data, $results, PREG_SET_ORDER);
        foreach ($results AS $result) {
            preg_match_all('%<div class="myplacement".*?[.]php[?]((?:next|before))=([0-9]+).*?<tbody.*?<td[^>]>.*?[0-9]+"%msi',
                $result[1], $mydata, PREG_SET_ORDER);
            // takes care of the data and finishes the program
        }

    Note: I need this for a freeware program, so it must be as general as possible and, if possible, not use PHP extensions.

    ADD: I omitted some parts here because I didn't expect answers like those. There is also a need to parse text inside one of the tags in the document. It may be the 6th, 7th or 8th tag, but I know it comes after a certain tag. The parser I've checked (thx profitphp) does work to find the script tag. What now? There is more than one tag with the same class. I want them all - but only those that also have one of a list of classes... Where can I find instructions, demos and limitations of DOM parsers (like the one at http://simplehtmldom.sourceforge.net/)? I need something that will work on, at least, a large number of free servers.
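
    On the DOM-parser follow-up: PHP's bundled dom extension (enabled by default on the vast majority of hosts, including free ones) plus XPath can usually replace both regexes. A rough sketch against the markup implied above, assuming the span sits inside the div (adjust the XPath if it is a sibling):

        $doc = new DOMDocument();
        libxml_use_internal_errors(true);      // tolerate real-world broken HTML
        $doc->loadHTML($data);
        $xpath = new DOMXPath($doc);

        foreach ($xpath->query('//div[@id="myHome"]') as $div) {
            $span = $xpath->query('.//span[@id="nReaders"]', $div)->item(0);
            $nReaders = $span ? (int) $span->textContent : 0;

            // tags carrying one class among several: match robustly on @class
            $nodes = $xpath->query('.//div[contains(concat(" ", @class, " "), " myplacement ")]', $div);
            // ... process $nodes and $nReaders
        }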

    Read the article

  • What is the simplest way to map a folder on the file system to a url in Tomcat?

    - by Simon
    Here's my problem... I have a small prototype app (it happens to be in Grails, hosted on AWS) and I want to add the ability for the user to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location outside my WAR. I realise that there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost. So, what is the simplest way I can have a virtual folder in my URL space map to a physical folder on disk? I sort of want http://myapp.com/static to map to a folder I can configure, e.g. /var/www/static, so I can then have in my code:

        <img src="/static/user1/picture.jpg"/>

    I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start. So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need. I don't want to use the Grails static rendering plugins.
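
    In stock Tomcat, the usual answer is an extra context whose docBase points at the folder. Dropping a small file into conf/Catalina/localhost does it, with the file name (static.xml) defining the URL path (a sketch using the paths from the question):

        <!-- conf/Catalina/localhost/static.xml -->
        <Context docBase="/var/www/static" />

    After a restart, /static/user1/picture.jpg is served straight off the disk by Tomcat's default servlet.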

    Read the article

  • Is there a way to specify a per-host deploy_to path with Capistrano?

    - by Chad Johnson
    I have searched and searched and asked a question already, and have not received a clear answer. I have the following deploy script (snippet):

        set :application, "testapplication"
        set :repository, "ssh://domain.com//srv/hg/#{application}"
        set :scm, :mercurial
        set :deploy_to, "/srv/www/#{application}"

        role :web, "domain1.com", "domain2.com"
        role :app, "domain1.com", "domain2.com"
        role :db,  "domain1.com", :primary => true, :norelease => true
        role :db,  "domain2.com", :norelease => true

    As you see, I have set deploy_to to a specific path, and I have also specified multiple web servers. However, each web server should have a different deployment path. I want to be able to run "cap deploy" and deploy to all hosts in one shot. I am NOT trying to deploy to staging and then to production - this is all production. My question is: how exactly do I specify a path per server? I have read the "Roles" documentation for Capistrano, and it is unclear how to do this. Can someone please post a deploy file example? Does anyone know? Am I crazy? Am I thinking of this wrong or something? No answers anywhere online. Nowhere. Nothing. Please, someone help.

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)?
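
    For what it's worth, a common middle ground is a buffer in the tens of kilobytes: large enough to amortise syscall overhead, small enough for frequent progress events, and (in .NET) below the ~85 KB large-object-heap threshold. A minimal sketch of the loop shape (sourcePath, destPath and OnProgress are placeholders):

        const int BufferSize = 64 * 1024;   // 64 KB
        var buffer = new byte[BufferSize];

        using (var input = File.OpenRead(sourcePath))
        using (var output = File.Create(destPath))
        {
            long total = input.Length, done = 0;
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
                done += read;
                OnProgress(done, total);    // progress event per chunk
            }
        }

    For FTP/HTTP sources the same size is a reasonable default, since network round-trips rather than the buffer tend to dominate; measuring a few sizes (8 KB, 64 KB, 256 KB) settles it quickly.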

    Read the article

  • Where are possible locations of queueing/buffering delays in Linux multicast?

    - by Matt
    We make heavy use of multicast messaging across many Linux servers on a LAN, and we are seeing a lot of delays. We basically send an enormous number of small packets, and we are more concerned with latency than throughput. The machines are all modern, multi-core (at least four cores, generally eight, 16 if you count hyperthreading) machines, always with a load of 2.0 or less, usually with a load of less than 1.0. The networking hardware is also under 50% capacity. The delays we see look like queueing delays: the packets will quickly start increasing in latency until it looks like they jam up, then return back to normal. The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it into a queue. In a separate thread, the queue is processed, analyzing the difference between the sending and receiving timestamps. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.) We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both), but we don't know how to track this down and trace it. For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).
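
    A low-risk first step (a sketch of the usual places to look, not a diagnosis): read the kernel's own UDP and socket-buffer counters on both sender and receiver while a latency spike is happening. If drops or deep receive queues show up, the queueing lives in the socket buffers and can be shifted with setsockopt(SO_SNDBUF/SO_RCVBUF) plus the sysctl ceilings:

        netstat -su                # per-protocol error/drop counters
        cat /proc/net/snmp         # the Udp: line; newer kernels also show
                                   # RcvbufErrors/SndbufErrors (2.6.9 may not)
        netstat -anu               # Recv-Q/Send-Q depth per UDP socket

        # socket buffer ceilings; raising these (plus setsockopt in the apps)
        # at least tells you whether the queue lives in the socket layer
        sysctl net.core.rmem_max net.core.wmem_max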

    Read the article

  • Cross-thread problem in C#

    - by Frederik Witte
    Hello people - I've got this code (lg_log is a ListBox, and I want it to log the output of start_server.bat). Here is the code I have:

        public void bt_play_Click(object sender, EventArgs e)
        {
            lg_log.Items.Add("Starting Mineme server ..");
            string directory = Directory.GetCurrentDirectory();
            var info = new ProcessStartInfo(directory + @"\start_base.bat")
            {
                UseShellExecute = false,
                RedirectStandardOutput = true,
                CreateNoWindow = true,
                WorkingDirectory = directory + @"\Servers\Base"
            };
            var proc = new Process { StartInfo = info, EnableRaisingEvents = true };
            proc.OutputDataReceived += (obj, args) =>
            {
                if (args.Data != null)
                {
                    lg_log.Items.Add(args.Data);
                }
            };
            proc.Start();
            proc.BeginOutputReadLine();
            lg_log.Items.Add("Server is now running!");
            proc.WaitForExit();
        }

    When I run this, I get an error. Can anybody help me? I'll rate the answer up! :D Edit: the error I get is a System.InvalidOperationException, and it occurs at the lg_log.Items.Add(args.Data); line. Hope that helps :)
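
    The likely cause, sketched (the standard WinForms fix, not tested against this exact code): OutputDataReceived fires on a thread-pool thread, and controls such as the ListBox may only be touched from the UI thread, which is exactly what InvalidOperationException guards against. Marshalling the update with BeginInvoke avoids it; proc.WaitForExit() on the UI thread will also freeze the form and is better removed or replaced by handling the Exited event:

        proc.OutputDataReceived += (obj, args) =>
        {
            if (args.Data != null)
            {
                // hop back onto the UI thread before touching the control
                lg_log.BeginInvoke((Action)(() => lg_log.Items.Add(args.Data)));
            }
        };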

    Read the article

  • Doing some downloading without blocking your app

    - by Code
    Hi guys, I'm working on my first app that makes a few different web connections at once. My first screen is my menu. At the bottom of viewDidLoad of MenuViewController I call a method that gets and parses an .xml file located on my web server. Also at the bottom of viewDidLoad I do

        FootballScores = [[FootBallScores alloc] init];

    and FootballScores makes a connection to an HTML page, which it loads into a string and then parses data out of. Now, since both of these are called at the bottom of viewDidLoad of the class responsible for the main menu (the first screen in the app), the app is kind of slow to load. What is the right way to do the above? Should I remove the two pieces of code from viewDidLoad and replace them with something like

        dataGetterOne = [NSTimer scheduledTimerWithTimeInterval:1.000 target:self selector:@selector(xmlParser) userInfo:nil repeats:NO];
        dataGetterTwo = [NSTimer scheduledTimerWithTimeInterval:2.000 target:self selector:@selector(htmlParser) userInfo:nil repeats:NO];

    so that the methods get called later on and viewDidLoad gets to finish before I try to get the data from the web servers? Is making two connections to web servers a second apart too quick? Can the iPhone handle having two connections open at once? I'm really unsure of anything bad/dangerous I'm doing in regards to connections. Many thanks, Code
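
    Timers only postpone the stall; the era-appropriate pattern is to push the fetch-and-parse work onto a background thread and hop back to the main thread for any UI update (a rough sketch; the method names come from the question, the completion callback is invented, and this is pre-ARC code):

        // in viewDidLoad: returns immediately, work happens off the main thread
        [self performSelectorInBackground:@selector(xmlParser) withObject:nil];
        [self performSelectorInBackground:@selector(htmlParser) withObject:nil];

        - (void)xmlParser {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            // ... download and parse ...
            [self performSelectorOnMainThread:@selector(xmlParsingFinished)
                                   withObject:nil
                                waitUntilDone:NO];
            [pool release];
        }

    Two simultaneous connections are fine; the phone can comfortably keep several open at once.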

    Read the article

  • ASP.NET problem - Firebug shows odd behaviour

    - by Brandi
    I have an ASP.NET application that does a large database read. It loads up a GridView inside an UpdatePanel. In VS2008, just running on my local machine, it runs fantastically. In production (identical code, just published and put on one of our network servers), it runs slow as dirt. Debug is set to false, so this is not the cause of the slowdown. I'm not an experienced web developer, so besides that, feel free to suggest the obvious. I have been using Firebug to determine what's going on, and here is what that has turned up: on production, there are around 500 requests; the timeline bar is very short; the size column varies from run to run, but is always the same for the duration of the run. Locally, there are about 30 requests, and the timeline bar takes up the entire space. Can anyone shed some light on why this is happening and what I can do to fix it? Also, I can't find much of anything on the web about this, so any references are helpful too.

    Read the article

  • Reference remotely located assembly (web URI) from locally installed application?

    - by moonground.de
    Hi Stackoverflowers! :) We have a .NET application for Windows which is installed locally by Microsoft Installer. Now we need to use additional assemblies which are located online on our web servers. We'd like to refer to a remote URI like https://www.ourserver.com/OurProductName/ExternalLib.dll and expose additional functionality, which is described roughly by a known common ("AddIn/Plugin") interface. These are not 3rd-party plugins; we just want to be able to exchange parts of the application frequently, without the need for frequent software updates. Our first idea was to add some kind of "remote reference" in Visual Studio by setting the path to the remote assembly URI, but Visual Studio downloaded the assembly immediately to a temporary directory and added a reference to it. Our second attempt, then, is simply using a WebRequest (or WebClient) to retrieve a binary stream of the assembly and loading it "from image" with Assembly.Load(...). This actually works, but is not very elegant and requires more additional programming for verification etc. We hoped ClickOnce would provide useful techniques, but apparently it's suitable for standalone applications only. (Correct me?) Is there a way (.NET native or via a framework/API) to reference remotely located assemblies? Thanks in advance and have a happy Easter!
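
    The WebClient + Assembly.Load route can at least be kept compact; a minimal sketch (the URL is from the question, IAddIn stands in for your common interface, and verifying a strong name or hash before loading is advisable, since Assembly.Load runs the code with the host's trust):

        using System;
        using System.Net;
        using System.Reflection;

        byte[] raw;
        using (var client = new WebClient())
        {
            raw = client.DownloadData("https://www.ourserver.com/OurProductName/ExternalLib.dll");
        }

        Assembly plugin = Assembly.Load(raw);
        foreach (Type t in plugin.GetTypes())
        {
            if (typeof(IAddIn).IsAssignableFrom(t) && !t.IsAbstract)
            {
                var addIn = (IAddIn)Activator.CreateInstance(t);
                addIn.Start();   // hypothetical interface member
            }
        }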

    Read the article

  • Running Sitecore Production Site under a Virtual Directory

    - by danswain
    We are using Sitecore 6 on a Windows Server 2003 (32-bit) dev machine. I know it's not recommended for the CMS editing site, but we've been told it is possible to get the front-end Sitecore websites to run from within a virtual directory. Here's the issue: we'd like to achieve what the poor man's diagram below shows.

        /WebSiteRoot (.NET 1.1)
         |
         |---- Custom .NET 1.1 Web Application
         |
         |---- Sitecore frontend Web Application (.NET 2.0)
         |
         |---- Custom .NET 2.0 Web Application

    The Sitecore web application would contain the Sitecore pipeline in its web.config, and we'd use the relevant config section to set up the virtual folder, to allow for where our Sitecore app sits and point it to the appropriate place in the content tree. Is it possible to pull this off? This is just the customer-facing website; there will be no CMS editing functionality on these servers - that will be done from a more standard Sitecore install inside the firewall on a different server. The errors we're encountering are centered around loading the various config files in the App_Config folder. It seems to do a Server.MapPath on "/" initially (which is wrong for us), so we've tried putting absolute paths in the web.config, and still no joy (I think there must be some hardcoded piece that looks for the Include directory). Any help would be greatly appreciated. Thanks

    Read the article

  • Cannot create a new VS data connection in Server Explorer

    - by Seventh Element
    I have a local instance of SQL Server 2008 Express Edition running on my development PC. I'm trying to create a new data connection through the Visual Studio Server Explorer. The steps are the following:

    1. Right-click the "Data Connections" node -> "Choose Data Source".
    2. I select "Microsoft SQL Server" as the data source; the "Add Connection" dialog window appears.
    3. I select my local server instance -> "Test connection" works fine.
    4. I select "AdventureWorks" as the database name -> "Test connection" works fine.
    5. Next I hit the "OK" button -> error message: "This server version is not supported. Only servers up to MS SQL Server 2005 are supported."

    I'm using Visual Studio 2008 Professional Edition. The target framework of the application is ".NET Framework 3.5". I have a reference to System.Data (framework v2.0) and cannot find another version of the assembly on my system. Am I referencing the wrong assembly? How can I fix this problem?

    Read the article

  • Only the default controller loads for all requests - Critical

    - by Jayapal Chandran
    Hi, my CodeIgniter project is live. I have two copies of it: one in the root and another in a subfolder, both configured to work normally. The root copy is the one that was made after testing in a subfolder. While running from a subfolder all worked well, but when copied to the root folder, the default controller is loaded for all requests, whereas in subfolders and on other servers it works well. It is like this: a true copy in the root folder at sitename.com and another true copy in a subfolder at sitename.com/abc. When requesting sitename.com/gallery, the default controller is loaded instead of the gallery controller. When I tried sitename.com/index.php/gallery/, it worked well... but sitename.com/gallery/ shows only the default controller, that is, the index page. Here is my .htaccess:

        php_flag magic_quotes_gpc off
        php_flag short_open_tag on
        RewriteEngine on
        RewriteCond $1 !^(index\.php|images|css|static|font|xml|flash|galleryimages|htc|store|robots\.txt)
        RewriteRule ^(.*)$ index.php/$1 [L]

    The server is Linux barracuda.elinuxservers.com 2.6.27.18-21 #1 SMP Tue Aug 25 18:13:37 UTC 2009 i686, PHP version 5.2.9.
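
    When rewritten URLs work with index.php in them but not without, the usual CodeIgniter suspect is URI-protocol detection differing between the two environments; forcing it in application/config/config.php is a cheap test (a sketch of standard CI settings, not a confirmed diagnosis):

        $config['index_page']   = '';
        $config['uri_protocol'] = 'REQUEST_URI';  // or PATH_INFO / ORIG_PATH_INFO

    It is also worth confirming that the root .htaccess is actually being honoured (AllowOverride), since the root directory may be governed by different Apache settings than the subfolder.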

    Read the article

  • Good Hosting Providers With Zend Framework Support

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got about 20 domains hosted in my account, none of them are high-volume, and they work just well enough - until today. This isn't meant to be a condemnation of ixwh, though; their prices are very low for what they offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites - they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains - not much. Who should I look at, and who should I look out for? Also: I put this on SO instead of SF because of the Zend Framework-specific requirement. If I'm wrong, do as you wish.

    Read the article

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All, background: I have a customer with some Python-based datacenter build scripts that I've inherited. I did not work on the original design, so I'm limited to some degree in what I can and can't change. My customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately other applications also use these values, so I cannot change them to make things easier for me. What I want to do is make the scripts more dynamic so that I don't have to keep updating them in the future and can just add more hosts to the property file. Unfortunately I can't change the current property file and have to work with it. The property file looks something like this:

        projectName.ClusterNameServer1.sslport=443
        projectName.ClusterNameServer1.port=80
        projectName.ClusterNameServer1.host=myHostA
        projectName.ClusterNameServer2.sslport=443
        projectName.ClusterNameServer2.port=80
        projectName.ClusterNameServer2.host=myHostB

    In their deployment scripts they basically have a lot of checks on projectName.ClusterNameServerX (where X is some number of defined entries) and then do something, e.g.:

        if projectName.ClusterNameServer1.host != "" do X
        if projectName.ClusterNameServer2.host != "" do X
        if projectName.ClusterNameServer3.host != "" do X

    Then, when they add another host (say Server4), they add another if statement. Question: what I would like to do is parse the properties file into some data structure, pass it to the deployment scripts, and just iterate over the structure to do the deployment, so I don't have to constantly add a bunch of "if some host, do something" checks. I'm curious to hear suggestions on how others would parse the file, what sort of data structure they would use, and how they would group things together by ClusterNameServer# or something else. Thanks
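
    One straightforward shape for this, sketched in Python (the file name is invented, the dotted-key layout is taken from the question, and deploy() stands in for whatever the existing scripts do per host): split each key on '.' and group by the middle segment, so adding a host to the file needs no script change:

        from collections import defaultdict

        def parse_properties(path):
            """Group projectName.<ServerN>.<prop>=<value> lines by server."""
            servers = defaultdict(dict)
            for line in open(path):
                line = line.strip()
                if not line or line.startswith('#') or '=' not in line:
                    continue
                key, value = line.split('=', 1)
                parts = key.split('.')
                if len(parts) == 3:
                    project, server, prop = parts
                    servers[server][prop] = value
            return dict(servers)

        # deployment code then iterates instead of hard-coding if-statements
        for server, props in sorted(parse_properties('build.properties').items()):
            if props.get('host'):
                deploy(props['host'], props.get('port'), props.get('sslport'))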

    Read the article

  • Exchange Server 2010: move mailboxes from recovered and mounted EDB to user's mailbox [closed]

    - by Cook
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server "2008Enterprise" crashed, so I created a new Exchange 2010 server named "Providence". On Providence I ran:

        New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran, from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147:

        eseutil /p E00

    I then mounted JBCMail with the mount command (note: I do not have the full command I typed). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail, and it shows as mounted on the Exchange server Providence. I can also see the crashed Exchange server; in the EMC its Copy Status under Server Configuration -> Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? And how do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead apex server.
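
    For the recovery-database half of the question, Exchange 2010's Restore-Mailbox can pull content out of a mounted RDB into an existing live mailbox on the new server; a sketch along the usual lines (the target mailbox must already exist on Providence, and the folder name is arbitrary):

        # copy Jason Young's content from the recovery database JBCMail
        # into his current mailbox, under a folder named "Recovered"
        Restore-Mailbox -Identity "Jason Young" -RecoveryDatabase JBCMail `
            -RecoveryMailbox "Jason Young" -TargetFolder "Recovered"

    Moving the live mailboxes off the Exchange 2003 server "apex" is a separate step (New-MoveRequest territory) rather than something Restore-Mailbox does.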

    Read the article

  • What's the best practice for handling system-specific information under version control?

    - by Joe
    I'm new to version control, so I apologize if there is a well-known solution to this. For this problem in particular I'm using git, but I'm curious about how to deal with this in all version control systems. I'm developing a web application on a development server. I have defined the absolute path name to the web application (not the document root) in two places. On the production server, this path is different. I'm confused about how to deal with this. I could either:

    1. Reconfigure the development server to share the same path as production, or
    2. Edit the two occurrences each time production is updated.

    I don't like #1 because I'd rather keep the application flexible for any future changes. I don't like #2 because if I start developing on a second development server with a third path, I would have to change this for every commit and update. What is the best way to handle this? I thought of:

    1. Using custom keywords and variable expansion (such as setting the property $PATH$ in the version control properties and having it expanded in all the files). Git doesn't support this because it would be a huge performance hit.
    2. Using post-update and pre-commit hooks. Possibly the likely solution for git, but every time I looked at the status, it would report the two files as being changed. Not really clean.
    3. Pulling the path from a config file outside of version control. Then I would have to have the config file in the same location on all servers. Might as well just have the same path to begin with.

    Is there an easy way to deal with this? Am I overthinking it?
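
    A common middle ground for the third option (a sketch; the file names are invented) is to version a template and keep the real, per-machine config out of version control, so status stays clean and each server keeps its own path:

        # one-time setup per machine
        cp config.example.php config.php   # the template is versioned
        echo "config.php" >> .gitignore    # the real config is never tracked
        # then edit config.php to hold this machine's absolute path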

    Read the article

  • Why does PDO print my password when the connection fails?

    - by Joe Hopfgartner
    I have a simple website where I establish a connection to a MySQL server using PDO:

        $dbh = new PDO('mysql:host=localhost;dbname=DB;port=3306', 'USER', 'SECRET',
                       array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8"));

    I had some traffic on my site, the server's connection limit was reached, and the website threw this error, with my plain-text password in it!

        Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[08004] [1040] Too many connections' in /home/premiumize-me/html/index.php:64
        Stack trace: #0 /home/premiumize-me/html/index.php(64): PDO->__construct('mysql:host=loca...', 'USER', 'SECRET', Array)
        #1 {main} thrown in /home/premiumize-me/html/index.php on line 64

    Ironically, I switched to PDO for security reasons. This really shocked me, because this exact error is something you can provoke very easily on most sites using simple HTTP flooding. I have now wrapped my connection in a try/catch clause, but still, I think this is catastrophic! So, I am new to PDO and my question is: what do I have to consider to be safe? How do I establish a connection in a secure way? Are there other known security holes like this one that I have to be aware of?
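
    The two standard mitigations (a sketch, not a complete hardening guide): catch PDOException yourself so the constructor's stack trace is never rendered, and turn display_errors off in production so any uncaught exception goes to the log instead of the page:

        // php.ini in production: display_errors = Off, log_errors = On
        try {
            $dbh = new PDO(
                'mysql:host=localhost;dbname=DB;port=3306',
                'USER',
                'SECRET',
                array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8")
            );
        } catch (PDOException $e) {
            // $e->getMessage() is safe to log; the *trace* is what leaks the args
            error_log('DB connect failed: ' . $e->getMessage());
            die('Service temporarily unavailable.'); // generic message for visitors
        }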

    Read the article
