Search Results

Search found 2346 results on 94 pages for 'dedicated'.

Page 61 of 94

  • How do I load only a single wordpress post via the url?

    - by Jared
    Hello, we are currently reworking our website; in the meantime, I am looking for a quick and dirty fix. We have WordPress set up so that not-so-tech-savvy employees can add events, news, etc. However, there are currently sections on our site dedicated to what would be tags in WordPress. For instance, we have posts in WP with the tag "events." It's easy enough to display all posts with that tag, but I need to do a PHP include on our old site and show just the post. I can use an rss2html tool, but it strips out some things, like necessary tables. So how do I display only a single WP post, without anything else (no menus, no settings, no WP interface), via a URL? I could use a stripped-down theme (by using something like a theme switcher), but I need it to load that theme only for this request, not have it become the default theme....
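
    One way to get "just the post" is a small standalone PHP file that bootstraps WordPress with theming disabled and echoes only the post body. This is a minimal sketch, not from the original post; the wp/ path and the id query parameter are illustrative assumptions.

        <?php
        // single-post.php - minimal sketch; assumes WordPress lives in ./wp/
        // and the post ID arrives as ?id=123 (both paths/params are illustrative).
        define('WP_USE_THEMES', false);                 // skip theme loading entirely
        require_once dirname(__FILE__) . '/wp/wp-load.php';

        $post_id = isset($_GET['id']) ? (int) $_GET['id'] : 0;
        $post    = get_post($post_id);

        if ($post && $post->post_status === 'publish') {
            // Output only the formatted post body: no menus, no theme chrome.
            echo apply_filters('the_content', $post->post_content);
        } else {
            header('HTTP/1.0 404 Not Found');
            echo 'Post not found.';
        }

    The old site can then include() this file or fetch its URL and receive nothing but the post markup.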

    Read the article

  • How is load balancing in big systems implemented?

    - by uther-lightbringer
    Hello, I'm wondering how load balancing is implemented in really big applications like Google or Facebook. I know that in a normal scenario there may be a machine dedicated to this task, but I would like to know how it is handled in really big applications with hundreds of thousands of people accessing them at any given time. I am just wondering how exactly, when one types google.com, that request finds its way to a concrete computer (are there multiple load balancers, and how is it set up so that a user's request finds its way to a particular balancer out of many others?). I would really appreciate it if someone could enlighten me on this issue, thank you.

    Read the article

  • jQuery - search for previous selector relative to current

    - by Thomas Slater
    I have a form with 3 sections in it. Each section has a dedicated div for fairly long contextual help messages regarding the input that has focus. My question is: if I give each of these help divs a class of "form-tip" and each input that has a tip the class "has-tip", how can I get the tip to always show up in the nearest preceding "form-tip" div? Something like: $('.has-tip').focus(function() { var tip = $(this).attr('title'); // find the closest preceding $('.form-tip') in the DOM and populate it with tip }); I hope that makes sense... Thanks for any help.

    Read the article

  • Integrating Incoming Email Into a php/mysql App

    - by phirschybar
    I am looking to create an incoming email daemon/switchboard that I can integrate with various remote PHP/MySQL apps. Ideally I want to check the 'to' address to see if it is in a MySQL database and, if it is, have the email parsed and POSTed via cURL to a target destination, as well as have attachments saved somewhere locally. I will likely set up a Rackspace Cloud server dedicated to this task (just accepting emails and posting to 3rd-party APIs). However, I do not know where to start. Which server platform/distribution should I go with? Which software needs to be customized, etc.?
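
    A common starting point is to have the MTA pipe each message into a PHP script. The sketch below is illustrative only: it assumes a Postfix-style alias pipe, and the routes table, column names, and credentials are made-up placeholders.

        #!/usr/bin/env php
        <?php
        // mail-switchboard.php - minimal sketch of the "pipe" approach, assuming the
        // MTA delivers each message to this script via an alias such as:
        //   switchboard: "|/usr/local/bin/mail-switchboard.php"
        $raw = stream_get_contents(STDIN);

        // Crude To: header extraction; a proper MIME parser is safer in practice.
        if (!preg_match('/^To:\s*.*?<?([^\s<>]+@[^\s<>]+)>?/mi', $raw, $m)) {
            exit(1);
        }
        $to = strtolower($m[1]);

        $db   = new mysqli('localhost', 'user', 'pass', 'switchboard');
        $stmt = $db->prepare('SELECT target_url FROM routes WHERE address = ?');
        $stmt->bind_param('s', $to);
        $stmt->execute();
        $stmt->bind_result($target);

        if ($stmt->fetch()) {
            // Forward the raw message to the target application via HTTP POST.
            $ch = curl_init($target);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, array('message' => $raw));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_exec($ch);
            curl_close($ch);
        }

    Attachment handling and error logging would sit on top of this, but the pipe-then-route shape stays the same.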

    Read the article

  • Getting a permission error when trying to connect to sql database

    - by Matt
    I have a SQL Server instance on a dedicated machine, running SQL Server 2008. I have the IP of the box and a database set up on it. I've built a small script that just does a connection test, and when I run it, I get the following error: Request for the permission of type 'System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. I've been told by the admin that SQL remote access has been granted for my IP address. Anybody know what's wrong?

    Read the article

  • Incremental build with continuous integration server

    - by altern
    Do any of the continuous integration servers support incremental builds or a filtering mechanism? For example, I want to configure some kind of filtering (as I call it) so that committing a file to a specific folder will not trigger a full (clean) build, but only an incremental build. By 'incremental build' I mean a process that copies only the committed files to the required place, so the whole application does not need to be rebuilt from scratch. Working with images is a good example of a case where we need such filtering and thus incremental builds: why rebuild the whole application if only images have changed? All we need to do is place the images in the dedicated location on the server.

    Read the article

  • How can I clean up this SELECT query?

    - by Cruachan
    I'm running PHP 5 and MySQL 5 on a dedicated server (Ubuntu Server 8.10) with full root access. I'm cleaning up some LAMP code I've inherited, and I have a large number of SQL selects with this type of construct: SELECT ... FROM table WHERE LCASE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE( strSomeField, ' ', '-'), ',', ''), '/', '-'), '&', ''), '+', '') ) = $somevalue Ignoring the fact that the database should never have been designed to require such a select in the first place, and that the $somevalue field will need to be parameterised to plug the gaping security hole, what is my best option for turning the WHERE condition into something less offensive? If I were using MSSQL or Oracle I'd simply put together a user-defined function, but my experience with MySQL is more limited and I've not written a UDF with it before, although I'm happy coding C. Update: For all those who've already raised their eyebrows at this in the original code, $somevalue is actually something like $_GET['product']; there are a few variations on the theme. In this case the select pulls the product back from the database by product name, after stripping out characters so it matches what could previously have been passed as a URI parameter.
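
    One pragmatic alternative to a UDF is to do the string mangling once, store the result in a new indexed column, and query it with a prepared statement. A minimal sketch, assuming such a column (here called strSomeFieldSlug) can be added and back-filled; table and column names are illustrative, not the real schema.

        <?php
        // Mirrors the nested REPLACE()/LCASE() logic from the original query;
        // applied once per existing row (e.g. in a one-off migration script)
        // to back-fill the new strSomeFieldSlug column from strSomeField.
        function make_slug($value) {
            $value = str_replace(array(' ', ',', '/', '&', '+'),
                                 array('-', '',  '-', '',  ''), $value);
            return strtolower($value);
        }

        $mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');

        // The lookup then becomes a plain, parameterised equality test
        // against an indexed column, since the URI parameter is already the slug.
        $stmt = $mysqli->prepare('SELECT * FROM products WHERE strSomeFieldSlug = ?');
        $product = $_GET['product'];
        $stmt->bind_param('s', $product);
        $stmt->execute();

    This also lets MySQL use an index on the slug column, which the function-wrapped WHERE clause never could.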

    Read the article

  • Which libraries use the "We Know Where You Live" optimization for std::make_shared?

    - by KnowItAllWannabe
    Over two years ago, Stephan T. Lavavej described a space-saving optimization he implemented in Microsoft's implementation of std::make_shared, and I know from speaking with him that Microsoft has nothing against other library implementations adopting this optimization. If you know for sure whether other libraries (e.g., those for GNU C++, Clang, Intel C++, plus Boost (for boost::make_shared)) have adopted this implementation, please contribute an answer. I don't have ready access to that many make_shared implementations, nor am I wild about digging into the bowels of the ones I have to see if they've implemented the WKWYL optimization, but I'm hoping that SO readers know the answers for some libraries off-hand. I know from looking at the code that as of Boost 1.52 the WKWYL optimization had not been implemented, but Boost is now up to version 1.55. Note that this optimization is different from std::make_shared's ability to avoid a dedicated heap allocation for the reference count used by std::shared_ptr. For a discussion of the difference between WKWYL and that optimization, consult this question.

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I am not sure if this is the right place to ask the question; however, I hope it is. When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind that the operating system uses most of the power and resources on a system, wouldn't having multiple operating systems on the same machine mean more wastage of resources? For instance, suppose I was running CentOS on a dedicated box and it was running, let's say, 20 background OS-level processes. Then I go and install a virtualization platform and install 5 more CentOS virtual machines on the same system, which are exactly the same as the host operating system. Doesn't this mean duplication of those 20 processes 6 times? So internally the context switching is happening between 120 processes instead of 20?

    Read the article

  • Tell AppleScript to go to a specific window in Excel

    - by Nick
    I've got a script that pulls information from an Excel (Mac Excel '04) spreadsheet and processes it through a local database. My problem (which is temporary, pending a dedicated scripting machine with Excel '08) is when I need to work on another spreadsheet in Excel. I want to ensure that the AppleScript continues reading data from the correct spreadsheet. Is there a way to refer to the specific Excel file in AppleScript, as opposed to just telling the Application in general? Or possibly just referencing the spreadsheet without having to have it open...?

    Read the article

  • Where to place the login/authentication related actions in MVC

    - by rogeriopvl
    I've searched around and found that, when implementing an authentication module in an MVC architecture, some people opt to place the login-related actions in the User controller, while others place them in a controller dedicated to authentication only. In pseudo-Java-like code: class UserController extends Controller { public login() { //... } } Accessed with http://mydomain.com/user/login. vs. class AuthController extends Controller { public login() { //... } } Accessed with http://mydomain.com/auth/login. I would like to know which approach is better, and why. That is, if there's really any difference at all. Thanks in advance.

    Read the article

  • Bash init.d script detect that mysqld has started and is running

    - by Ricket
    I'm working on my dedicated server running CentOS. I found out that one of my applications, which starts up via a script in /etc/init.d/, requires MySQL to be running, or else it throws an error, so essentially I currently have to start it by hand. How can I detect, in a bash script (#!/bin/sh), whether the MySQL service has started yet? Is there some way to poll port 3306 until it is open to accept connections, and only then continue with the script? Or maybe set an order so that the script doesn't run until the mysqld script runs?

    Read the article

  • Get PropertyInfo from property instead of name

    - by Sam
    Say, for example, I've got this simple class: public class MyClass { public String MyProperty { get; set; } } The way to get the PropertyInfo for MyProperty would be: typeof(MyClass).GetProperty("MyProperty"); This sucks! Why? Easy: it will break as soon as I change the name of the property, it needs a lot of dedicated tests to find every location where a property is used like this, and refactoring and usage trees are unable to find these kinds of access. Isn't there any way to properly access a property? Something that is validated at compile time? I'd love a command like this: propertyof(MyClass.MyProperty);

    Read the article

  • Controlling access to large files in Apache

    - by obeattie
    Hi there, I am looking to control access to some large files (we're talking many GB here) by the use of signed URLs. The files are currently restricted by LDAP basic authentication (mod_auth_ldap), but I need to change this to verify a signature (passed as a query parameter in the URL). Basically, I just need to run a script to verify the signature and allow the request to proceed as if authentication had succeeded. My initial thought was just to use a simple CGI script, but as the files are so large I'm concerned about performance. So, really, this question is (probably) more like "are there any performance implications of streaming large files from a CGI script via Apache?"... and if so, "is there a better way of doing this (short of writing a dedicated authentication module)?" If this makes any sense, help would be much appreciated :) P.S. I wasn't sure exactly what to search for (10 minutes of Googling were fruitless), so I may very well be duplicating someone else's post.
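
    One way to avoid streaming the bytes through a script at all is to verify the signature in a tiny PHP handler and then hand delivery back to Apache. A minimal sketch, assuming mod_xsendfile is installed; the parameter names (file, expires, sig), the secret, and the storage path are all illustrative.

        <?php
        // download.php - checks an HMAC signature, then lets Apache stream the file.
        $secret  = 'change-me';
        $file    = basename($_GET['file']);          // avoid path traversal
        $expires = (int) $_GET['expires'];
        $sig     = $_GET['sig'];

        $expected = hash_hmac('sha256', $file . '|' . $expires, $secret);

        // A constant-time comparison (hash_equals) is preferable where available.
        if ($expires < time() || $sig !== $expected) {
            header('HTTP/1.0 403 Forbidden');
            exit('Invalid or expired signature.');
        }

        // Apache serves the file efficiently; PHP is out of the picture immediately.
        header('Content-Type: application/octet-stream');
        header('X-Sendfile: /srv/protected/' . $file);

    The script only touches headers, so its cost is independent of file size.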

    Read the article

  • How to automatically check out a database file in a source-controlled web application?

    - by TheRHCP
    Hello, I am working on an ASP.NET web application. We are a small team (4 students) and we do not have access to a dedicated server to host the database instance, so for this web application we decided just to put the database file in the App_Data folder. The problem is that our project is source-controlled on TFS, so every time we open the solution and try to launch the web application, we get an exception saying that the database is read-only. That is logical, because the database file is not automatically checked out. Is there a workaround to avoid a manual check-out of the database file every time we open the solution? Thanks.

    Read the article

  • Securely using exec with PHP to run ffmpeg

    - by Venkat D.
    I would like to run ffmpeg from PHP for video encoding purposes. I was thinking of using the exec or passthru commands. However, I have been warned that enabling these functions is a security risk. In the words of my support staff: "The directive 'disable_functions' is used to disable any functions that allow the execution of system commands. This is for more security of the server. These PHP functions can be used to crack the server if not used properly." I'm guessing that if exec is enabled, then someone could (possibly) execute an arbitrary Unix command. Does anyone know of a secure way to run ffmpeg from PHP? By the way, I'm on a dedicated server. Thanks ahead of time!
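
    The usual precautions when exec() has to stay enabled are: a hard-coded binary path, strict validation of the only user-supplied value, and escapeshellarg() on every argument. A minimal sketch; the paths, parameter name, and encoding options are illustrative, not a definitive setup.

        <?php
        $ffmpeg = '/usr/bin/ffmpeg';                 // fixed binary path, never user input

        $input = $_POST['upload_id'];                // an opaque ID, never a raw filename
        if (!preg_match('/^[a-zA-Z0-9_-]+$/', $input)) {
            die('Invalid input.');
        }

        $src = '/var/uploads/' . $input . '.mov';    // server-controlled directories only
        $dst = '/var/encoded/' . $input . '.mp4';

        // Build the command with every argument escaped; options are hard-coded.
        $cmd = sprintf('%s -i %s -vcodec libx264 -acodec aac %s 2>&1',
                       $ffmpeg, escapeshellarg($src), escapeshellarg($dst));

        exec($cmd, $output, $status);
        if ($status !== 0) {
            error_log("ffmpeg failed: " . implode("\n", $output));
        }

    Running the encode from a queue/cron worker under a low-privilege user, rather than directly in the web request, further limits what a compromised input could do.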

    Read the article

  • Sending a large file over network continuously

    - by David Parunakian
    Hello, we need to write software that will continuously (i.e. new data is sent as it becomes available) send very large files (several TB) to several destinations simultaneously. Some destinations have a dedicated fiber connection to the source, while some do not. Several questions arise: We plan to use TCP sockets for this task. What failover procedure would you recommend in order to handle network outages and dropped connections? What should happen upon upload completion: should the server close the socket? If so, is it a good design decision to have another daemon provide file checksums on another port? Could you recommend a method to handle corrupted files, aside from downloading them again? Perhaps I could break them into 10 MB chunks and calculate checksums for each chunk separately?
    Thanks.
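
    The per-chunk checksum idea mentioned in the question is straightforward to sketch: hash each fixed-size block so that only mismatched chunks ever need to be re-sent. A minimal illustration; the chunk size, hash algorithm, and file path are arbitrary choices, not a recommendation from the original post.

        <?php
        // Returns one checksum per fixed-size block of the file.
        function chunk_checksums($path, $chunkSize = 10485760 /* 10 MB */) {
            $sums = array();
            $fh = fopen($path, 'rb');
            while (!feof($fh)) {
                $chunk = fread($fh, $chunkSize);
                if ($chunk === '' || $chunk === false) {
                    break;
                }
                $sums[] = hash('sha1', $chunk);   // index i => checksum of block i
            }
            fclose($fh);
            return $sums;
        }

        // Sender and receiver compute the same list; the receiver re-requests
        // only the block indexes whose checksums disagree.
        $checksums = chunk_checksums('/data/outgoing/huge-file.bin');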

    Read the article

  • PHP+MYSQL Server Config

    - by Matias
    Hi guys, I am parsing an XML file with PHP and inserting the rows into a MySQL database. I am using PHP's simplexml_load_file to load the XML and a foreach to loop through the array and insert the rows into my database. It works perfectly fine with the small files I am testing, but in reality I need to parse a large 500 MB XML file, and then nothing happens. I was wondering what the right php.ini config is for this case? I have a Linux CentOS VPS with 256 MB of dedicated memory and MySQL 5.0.5. I have also set the PHP memory_limit = 256M (the maximum for my server). Any suggestions or similar experiences will be greatly appreciated. Thanks
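
    Rather than raising memory_limit, a streaming parser keeps memory use flat no matter how large the file is. A minimal sketch using XMLReader in place of simplexml_load_file; the <row> element name, columns, and credentials are illustrative assumptions about the feed.

        <?php
        $reader = new XMLReader();
        $reader->open('/path/to/big-file.xml');

        $db   = new mysqli('localhost', 'user', 'pass', 'mydb');
        $stmt = $db->prepare('INSERT INTO items (name, price) VALUES (?, ?)');
        $doc  = new DOMDocument();

        while ($reader->read()) {
            if ($reader->nodeType == XMLReader::ELEMENT && $reader->name == 'row') {
                // Expand only this one element, so only one row is in memory at a time.
                $node  = simplexml_import_dom($doc->importNode($reader->expand(), true));
                $name  = (string) $node->name;
                $price = (float) $node->price;

                $stmt->bind_param('sd', $name, $price);
                $stmt->execute();
            }
        }
        $reader->close();

    With this shape, the 256 MB limit stops mattering, and the prepared statement keeps the insert loop cheap on the MySQL side as well.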

    Read the article

  • What is the reliable way to return error code from an MPI program?

    - by mezhaka
    The MPI standard (page 295) says: "Advice to users. Whether the errorcode is returned from the executable or from the MPI process startup mechanism (e.g., mpiexec), is an aspect of quality of the MPI library but not mandatory." Indeed I had no success in running the following code: if(0 == my_rank) { FILE* parameters = fopen("parameters.txt", "r"); if(NULL == parameters) { fprintf(stderr, "Could not open parameters.txt file.\n"); printf("Could not open parameters.txt file.\n"); exit(EXIT_FAILURE); //Tried MPI_Abort() as well } fscanf(parameters, "%i %f %f %f", N, X_DIMENSION_Dp, Y_DIMENSION_Dp, HEIGHT_DIMENSION_Dp); fclose(parameters); } I am not able to get the error code back into the shell in order to make a decision on further actions. Neither of the two error messages is printed. I think I might write the error codes and messages to a dedicated file. Has anyone ever had a similar problem, and what options did you consider to achieve reliable error reporting?

    Read the article

  • DBA's say no to SQL Server DTC?

    - by NabilS
    I am trying to get our DBAs to enable DTC on a cluster of SQL Server 2005. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (which could take months!), as it is not a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this? Thanks

    Read the article

  • Scheduled cron job to check for pending activity

    - by luckytaxi
    Using PHP... This is for my personal use, so I'm thinking maybe 3-4 emails a day. I'm at a point where I can send an email to a dedicated email address, where my script parses the message and stores it in a DB. Now I need to figure out the best way to check the records in the DB for any upcoming task. I feel like I'm missing something, maybe a trigger field for when a reminder should go out. However, that's not a concern to me at the moment, since I'll just send an alert 15 minutes prior to the due date. Question is, should I run a cron job that queries the DB every minute? I take it the query will have to say something like "select all tasks that are due within 15 minutes."
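
    A once-a-minute cron job is plenty for 3-4 emails a day. A minimal sketch of what that job could look like; the table and column names (tasks, due_at, notified) and the crontab line are illustrative assumptions.

        <?php
        // reminder-cron.php - run every minute, e.g. from crontab:
        //   * * * * * /usr/bin/php /home/user/reminder-cron.php
        $db = new mysqli('localhost', 'user', 'pass', 'reminders');

        // Tasks due within the next 15 minutes that have not been alerted on yet.
        $sql = "SELECT id, subject, due_at
                FROM tasks
                WHERE notified = 0
                  AND due_at BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 15 MINUTE)";

        $result = $db->query($sql);
        while ($task = $result->fetch_assoc()) {
            mail('me@example.com',
                 'Reminder: ' . $task['subject'],
                 'Due at ' . $task['due_at']);

            // Mark it so the next run does not alert on the same task again.
            $db->query('UPDATE tasks SET notified = 1 WHERE id = ' . (int) $task['id']);
        }

    The notified flag (or a sent_at timestamp) is the "trigger field" the question hints at: it is what keeps a per-minute poll from re-sending the same alert.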

    Read the article

  • delete all records except the id I have in a python list

    - by jay_t
    Hi all, I want to delete all records in a MySQL db except the record ids I have in a list. The length of that list can vary and could easily contain 2000+ ids... Currently I convert my list to a string so it fits in something like this: cursor.execute("""delete from table where id not in (%s)""", (list)) This doesn't feel right, and I have no idea how long the list is allowed to be... What's the most efficient way of doing this from Python? Altering the structure of the table with an extra field to mark/unmark records for deletion would be great, but is not an option. Having a dedicated table storing the ids would indeed be helpful, as this could then just be done through a SQL query... but I would really like to avoid these options if possible. Thanks,

    Read the article

  • When configuring daily backups, which files should I include to be sure I have the MySQL db's

    - by user575599
    I have a dedicated LAMP server with cPanel hosting 100 websites (some of them have MySQL dbs). I am currently using Jungle Disk Server Edition to back up our files from the LAMP server to Amazon S3. Once a week we back up the entire cPanel, which is an enormous strain on resources, but that is a separate issue. Now, what I want to do is set up a daily job to back up just the HTML files and the MySQL dbs. If I just back up the "public_html" folder, will my MySQL database info be stored in that directory? Would backing up the public_html folder be enough to recover the db? I can find plenty of resources online about how to manually back up MySQL dbs, but with 100 sites I need it automated. I'm hoping for an easy solution where I can just grab a folder to back up each day.
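
    Backing up public_html alone is not enough: MySQL data normally lives under /var/lib/mysql, not in the web root. A common pattern is a daily job that dumps every database into one dated folder which the backup tool then grabs. A minimal sketch in PHP; the credentials, paths, and schema skip-list are illustrative.

        <?php
        // dump-all-dbs.php - daily cron job: one mysqldump file per database
        // written into a single folder for the backup tool to pick up.
        $user   = 'root';
        $pass   = 'secret';
        $outDir = '/backup/mysql/' . date('Y-m-d');
        if (!is_dir($outDir)) {
            mkdir($outDir, 0700, true);
        }

        $db     = new mysqli('localhost', $user, $pass);
        $result = $db->query('SHOW DATABASES');

        while ($row = $result->fetch_row()) {
            $name = $row[0];
            if (in_array($name, array('information_schema', 'performance_schema'))) {
                continue;   // skip system schemas
            }
            $cmd = sprintf('mysqldump -u%s -p%s %s > %s 2>/dev/null',
                           escapeshellarg($user),
                           escapeshellarg($pass),
                           escapeshellarg($name),
                           escapeshellarg($outDir . '/' . $name . '.sql'));
            exec($cmd);
        }

    Pointing the daily Jungle Disk job at /backup/mysql plus the public_html folders then covers both the files and the databases.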

    Read the article

  • Users lose access privileges every night

    - by armannvg
    Hi, I'm having a problem with a WSS 3.0 instance where users keep losing their upload access rights every night. Users can always enter the site involved and can browse lists, document libraries, etc. just like normal, but they get an access denied error when trying to upload documents. This can be fixed by reapplying their access rights in WSS, but it has to be done every morning, which makes that solution not feasible. This is a dedicated virtual server with no other applications running on it. Has anyone had a similar problem regarding WSS and SharePoint access rights?

    Read the article

  • How to increase thread-pool threads on IIS 7.0

    - by Xaqron
    Environment: Windows Server 2008 Enterprise, IIS 7.0, ASP.NET 2.0 (CLR), .NET 4.0. I have an ASP.NET application with no pages and no session (an HttpHandler); it is a streaming server. I use two threads for processing each request, so if there are 100 connected clients, then 200 threads are used. This is a dedicated server and there are no other applications on it. The problem is that after 200 clients are connected (under stress testing) the application refuses new clients, but if I increase the number of worker processes for the application pool (create a web garden) then I can have 200 new happy clients per w3wp process. I feel the .NET thread pool limit is reached at that point and I need to increase it. Thanks

    Read the article
