Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.


  • Check-for-modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same machine (OS: Microsoft Windows Server 2003 SP2). When I force-build the project with CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message, and the build report says:
    BUILD EXCEPTION Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source': Server certificate verification failed: issuer is not trusted (https://sp-ci.sbsnetwork.local:8443).
    Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log sameUrlAbove -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml --username ccnetadmin --password cruise --non-interactive --no-auth-cache
    at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
    at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
    at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
    at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
    at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)
    My sourcecontrol node in ccnet.config is:
    <sourcecontrol type="svn">
      <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
      <trunkUrl>check-out URL</trunkUrl>
      <workingDirectory>C:\ProjectWorkingDirectories\IntranetPortal\Source</workingDirectory>
      <username>ccnetadmin</username>
      <password>cruise</password>
    </sourcecontrol>
    Can anyone suggest how to avoid this error?
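
    The failure means the svn.exe that CruiseControl.NET launches does not trust VisualSVN Server's self-signed certificate. A minimal sketch of one common fix, assuming the CCNet service runs under a dedicated Windows account (the account name below is a placeholder; the URL is the one from the error): run svn interactively once as that account and accept the certificate permanently, so the decision is cached in that account's Subversion auth area and the later --non-interactive runs stop failing.

    ```
    :: Open a shell as the account the CruiseControl.NET service runs under (name assumed)
    runas /user:SBSNETWORK\ccnetservice cmd

    :: In that shell, contact the repository once WITHOUT --non-interactive
    "C:\Program Files\VisualSVN Server\bin\svn.exe" list https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source

    :: At the "Server certificate verification failed" prompt, answer "p" (accept permanently).
    :: The choice is stored under that account's %APPDATA%\Subversion\auth directory.
    ```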

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what the best way is to implement data syncing. The criteria are speed (less network data exchanged), robustness (data recovery in case an update fails), offline access, and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you share some of your strategy/experience? For me, I'm thinking of something like this:
    1) Store the last-modified date on the iPhone.
    2) Upon launching, send a request like getNewData.php?lastModifiedDate=...
    3) The server processes it and sends back only the data modified since last time.
    4) This data is formatted like so:
    <+><data id="..."></data></+> // add this to SQLite/Core Data
    <-><data id="..."></data></-> // remove this
    <%><data id="..."><attribute>newValue</attribute></data></%> // new modified value
    I don't want to use <+>, <->, <%>... for each attribute as well, because it would be too complicated, so probably when I receive a <%> field I would just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field).
    5) Once everything is downloaded and updated, I update the last-modified date field.
    The main problem with this strategy: if the network goes down while I am updating something, the last-modified date is not yet updated, so next time I relaunch the app I will have to go through the same thing again, not to mention potential inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a last-modified date for each data field and update the data gradually?
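
    One way to tackle the atomicity worry in step 5 is to apply the whole delta and the new last-modified value inside a single database transaction, so an interrupted sync leaves both the data and the sync marker at their previous state. A minimal sketch of that idea in Python with SQLite (fetch_delta is a hypothetical stand-in for the getNewData.php call, and the items/sync_state tables are assumptions):

    ```python
    import sqlite3

    def fetch_delta(last_modified):
        """Hypothetical stand-in for calling getNewData.php?lastModifiedDate=...
        Returns (added, removed, changed, new_last_modified)."""
        raise NotImplementedError

    def apply_delta(db_path, last_modified):
        added, removed, changed, new_last_modified = fetch_delta(last_modified)
        conn = sqlite3.connect(db_path)
        try:
            with conn:  # one transaction: commits on success, rolls back on any exception
                for row in added:
                    conn.execute("INSERT INTO items (id, value) VALUES (?, ?)",
                                 (row["id"], row["value"]))
                for row_id in removed:
                    conn.execute("DELETE FROM items WHERE id = ?", (row_id,))
                for row in changed:
                    # same simplification as in the question: replace the whole row
                    conn.execute("DELETE FROM items WHERE id = ?", (row["id"],))
                    conn.execute("INSERT INTO items (id, value) VALUES (?, ?)",
                                 (row["id"], row["value"]))
                # the sync marker only advances if every statement above succeeded
                conn.execute("UPDATE sync_state SET last_modified = ?",
                             (new_last_modified,))
        finally:
            conn.close()
    ```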

    Read the article

  • Wireshark Dissector: How to Identify Missing UDP Frames?

    - by John Dibling
    How do you identify missing UDP frames in a custom Wireshark dissector? I have written a custom dissector for the CQS feed (reference page). One of our servers sees gaps when receiving this feed. According to Wireshark, some UDP frames are never received. I know that the frames were sent, because all of our other servers are gap-free. A CQS frame consists of multiple messages, each having its own sequence number. My custom dissector provides the following data to Wireshark:
    cqs.frame_gaps - the number of gaps within a UDP frame (always zero)
    cqs.frame_first_seq - the first sequence number in a UDP frame
    cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
    cqs.frame_msg_count - the number of messages in this UDP frame
    I am displaying each of these values in custom columns (screenshot not reproduced here). I tried adding code to my dissector that simply saves the last-processed sequence number (as a local static) and flags a gap when the dissector processes a frame where current_sequence != (previous_sequence + 1). This did not work, because the dissector can be called in random-access order, depending on where you click in the GUI: you could process frame 10, then frame 15, then frame 11, and so on. Is there any way for my dissector to know whether the frame that came before it (or the frame that follows) is missing? The dissector is written in C. (See also a companion post on serverfault.com.)
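
    Dissectors usually handle this by doing the sequential bookkeeping only on the first pass over the capture file and caching a per-frame result, so random-access redisplay just reads the cache. A rough sketch of that pattern, assuming a GLib hash table keyed by frame number; the exact epan structure and function names vary between Wireshark releases, so treat the calls as illustrative rather than copy-paste ready:

    ```c
    /* Sketch only: the shape (visited flag + per-frame cache) is the point. */
    static GHashTable *expected_by_frame;   /* frame number -> expected first sequence   */
    static guint32     running_expected;    /* advanced only on the first, in-order pass */

    /* expected_by_frame is created once, e.g. in the protocol init routine:
     *   expected_by_frame = g_hash_table_new(g_direct_hash, g_direct_equal); */

    static void
    dissect_cqs(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
    {
        guint32 first_seq = tvb_get_ntohl(tvb, 0);   /* field offsets are placeholders */
        guint32 msg_count = tvb_get_ntohl(tvb, 4);
        guint32 expected;

        if (!pinfo->fd->visited) {
            /* The first pass visits frames strictly in capture order, so the running
             * counter is valid here.  Remember what we expected for this frame.      */
            expected = running_expected;
            g_hash_table_insert(expected_by_frame,
                                GUINT_TO_POINTER(pinfo->fd->num),
                                GUINT_TO_POINTER(expected));
            running_expected = first_seq + msg_count;
        } else {
            /* GUI clicks redisplay frames in arbitrary order: read the cached value. */
            expected = GPOINTER_TO_UINT(g_hash_table_lookup(expected_by_frame,
                                            GUINT_TO_POINTER(pinfo->fd->num)));
        }

        if (expected != 0 && first_seq != expected) {
            /* A gap precedes this frame: set cqs.frame_gaps or add an expert-info item. */
        }
    }
    ```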

    Read the article

  • Webcrawler, feedback?

    - by Jan Kuboschek
    Hey folks, every once in a while I need to automate data-collection tasks from websites. Sometimes I need a bunch of URLs from a directory, sometimes I need an XML sitemap (yes, I know there is lots of software for that, and online services). Anyway, as a follow-up to my previous question, I've written a little webcrawler that can visit websites:
    - A basic crawler class to easily and quickly interact with one website. Override "doAction(String URL, String content)" to process the content further (e.g. store it, parse it).
    - The concept allows for multi-threading of crawlers; all class instances share the processed and queued lists of links.
    - Instead of keeping track of processed and queued links within the object, a JDBC connection could be established to store links in a database.
    - Currently limited to one website at a time; however, it could be extended by adding an externalLinks stack and adding to it as appropriate.
    JCrawler is intended to be used to quickly generate XML sitemaps or parse websites for the information you want, and it's lightweight. Is this a good/decent way to write a crawler, given the limitations above?
    http://pastebin.com/VtgC4qVE - Main.java
    http://pastebin.com/gF4sLHEW - JCrawler.java
    http://pastebin.com/VJ1grArt - HTMLUtils.java
    Thanks for your feedback in advance! :)
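
    The pastebin links will not last forever, so here is a rough sketch of the kind of shared, thread-safe link frontier the description implies (the class and method names are invented for illustration and are not taken from JCrawler itself). Each crawler thread would loop on poll(), fetch the page, call doAction, and offer any discovered links back:

    ```java
    import java.util.Collections;
    import java.util.Queue;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    /** Minimal shared frontier: all crawler threads poll one queue and
     *  record what they have already seen, so no URL is processed twice. */
    public class CrawlFrontier {
        private final Queue<String> queued = new ConcurrentLinkedQueue<String>();
        private final Set<String> seen =
                Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

        /** Add a URL unless it was already queued or processed. */
        public void offer(String url) {
            if (seen.add(url)) {        // add() is atomic: returns false if already present
                queued.add(url);
            }
        }

        /** Next URL to crawl, or null when the queue is currently empty. */
        public String poll() {
            return queued.poll();
        }
    }
    ```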

    Read the article

  • Rails: creating a model instance automatically when another is created

    - by bob
    Hello. I have a User model and a Feedback model with Ratings. Whenever a new user is created I want to create a new feedback record with it automatically. Each user has one feedback record, and each feedback record has many ratings. My classes:
    class User < ActiveRecord::Base
    end
    class Feedback < ActiveRecord::Base
      belongs_to :user
      has_many :ratings
    end
    class Rating < ActiveRecord::Base
      belongs_to :feedback
    end
    My database tables:
    - user doesn't have anything special.
    - feedback has user_id. This user_id should be the same as the user that has just been created. For example, when the user with id 1 is created, a feedback record should be created that belongs to user id 1, so the user_id column in the feedback table will also be 1.
    - rating has a feedback_id and a user_id; the user_id in this case is the id of the person who submitted the rating. I am assigning it through the build command, and I believe my process is correct there.
    The goal is to have each user own a feedback record that has many ratings from other users, so if someone goes to the feedback page, they will see all the ratings given and by whom. Is there a better way to approach this? How do you create a feedback record tied to the user's id right when a new user is created? The idea is that when a user is created, a feedback record is created and associated with that user, so people can then go to http://localhost:3000/users/1/feedback/ and submit new ratings. I'm trying to avoid having a user rate another user with just a Ratings model, because I'm not sure how to do that.
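
    A common way to get the companion record created automatically is a has_one association plus an after_create callback on User. A minimal sketch, assuming the models described above (Rails fills in feedback.user_id from the association):

    ```ruby
    class User < ActiveRecord::Base
      has_one :feedback
      after_create :create_initial_feedback

      private

      # Runs once, right after the users row is saved, so the new Feedback
      # row gets user_id set to this user's freshly assigned id.
      def create_initial_feedback
        create_feedback   # generated by has_one :feedback
      end
    end
    ```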

    Read the article

  • Script to add user to MediaWiki

    - by Marquis Wang
    I'm trying to write a script that will create a user in MediaWiki, so that I can run a batch job to import a series of users. I'm using mediawiki-1.12.0. I got this code from a forum, but it doesn't look like it works with 1.12 (it's for 1.13):
    $name = 'Username';   # Username (MUST start with a capital letter)
    $pass = 'password';   # Password (plaintext, will be hashed later down)
    $email = 'email';     # Email (automatically gets confirmed after the creation process)
    $path = "/path/to/mediawiki";
    putenv( "MW_INSTALL_PATH={$path}" );
    require_once( "{$path}/includes/WebStart.php" );
    $pass = User::crypt( $pass );
    $user = User::createNew( $name, array( 'password' => $pass, 'email' => $email ) );
    $user->confirmEmail();
    $user->saveSettings();
    $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 );
    $ssUpdate->doUpdate();
    Thanks!
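
    For comparison, the maintenance scripts bundled with MediaWiki in that era (e.g. maintenance/createAndPromote.php) create accounts through the User object's own setters instead of the static helpers above, which avoids the 1.13-only User::crypt() call. A hedged sketch along those lines; the method names are from memory of that API generation, so verify each one against your 1.12 tree before batch-running it:

    ```php
    <?php
    $name  = 'Username';            // must start with a capital letter
    $pass  = 'password';
    $email = 'user@example.com';
    $path  = '/path/to/mediawiki';

    putenv( "MW_INSTALL_PATH={$path}" );
    require_once( "{$path}/includes/WebStart.php" );

    $user = User::newFromName( $name );
    if ( !$user ) {
        die( "Invalid username.\n" );   // add an "already exists" check for real batch runs
    }

    $user->addToDatabase();
    $user->setPassword( $pass );        // hashes internally, no separate crypt step
    $user->setEmail( $email );
    $user->confirmEmail();
    $user->saveSettings();

    $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 );
    $ssUpdate->doUpdate();
    ```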

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: please do not mod down or close. I'm not a stupid PC user asking you to fix my PC problem; I am intrigued and am having a deep technical look at what's going on. I have come across a Windows XP machine that is sending unwanted P2P traffic. I have run 'netstat -b', and explorer.exe is sending out the traffic. When I kill this process the traffic stops, and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):
    GNUTELLA CONNECT/0.6
    Listen-IP: x.x.x.x:8059
    Remote-IP: 76.164.224.103
    User-Agent: LimeWire/5.3.6
    X-Requeries: false
    X-Ultrapeer: True
    X-Degree: 32
    X-Query-Routing: 0.1
    X-Ultrapeer-Query-Routing: 0.1
    X-Max-TTL: 3
    X-Dynamic-Querying: 0.1
    X-Locale-Pref: en
    GGEP: 0.5
    Bye-Packet: 0.1
    GNUTELLA/0.6 200 OK
    Pong-Caching: 0.1
    X-Ultrapeer-Needed: false
    Accept-Encoding: deflate
    X-Requeries: false
    X-Locale-Pref: en
    X-Guess: 0.1
    X-Max-TTL: 3
    Vendor-Message: 0.2
    X-Ultrapeer-Query-Routing: 0.1
    X-Query-Routing: 0.1
    Listen-IP: 76.164.224.103:15649
    X-Ext-Probes: 0.1
    Remote-IP: x.x.x.x
    GGEP: 0.5
    X-Dynamic-Querying: 0.1
    X-Degree: 32
    User-Agent: LimeWire/4.18.7
    X-Ultrapeer: True
    X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230
    GNUTELLA/0.6 200 OK
    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++, and the only related ones I can see are socket connections etc. My question is: what can I do to look into this deeper? What does malware achieve by sending P2P traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.
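
    For digging deeper without touching the box much, a few module-level checks usually narrow down what got loaded into explorer.exe. The commands below are a sketch; the Sysinternals tools are assumed to be downloaded and on the PATH:

    ```
    :: Built in: list every DLL loaded by explorer.exe
    tasklist /m /fi "imagename eq explorer.exe"

    :: Sysinternals listdlls: full paths and versions, so odd or unsigned DLLs stand out
    listdlls.exe explorer.exe > explorer_dlls.txt

    :: Sysinternals handle: open files, registry keys and sockets held by the process
    handle.exe -p explorer.exe > explorer_handles.txt
    ```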

    Read the article

  • Apache attack on compromised server, iframe injected by string replace

    - by Quang-Tuan Luong
    My server has been compromised recently. This morning, I discovered that the intruder is injecting an iframe into each of my HTML pages. After testing, I found that the way he does it is by getting Apache (?) to replace every instance of
    </body>
    with
    <iframe link to malware></iframe></body>
    For example, if I browse a file residing on the server consisting of:
    </body>
    </body>
    then my browser sees a file consisting of:
    <iframe link to malware></iframe></body>
    <iframe link to malware></iframe></body>
    I have immediately stopped Apache to protect my visitors, but so far I have not been able to find what the intruder changed on the server to perform the attack. I presume he has modified an Apache config file, but I have no idea which one. In particular, I have looked for recently modified files by timestamp, but did not find anything noteworthy. Thanks for any help. Tuan.
    PS: I am in the process of rebuilding a new server from scratch, but in the meantime I would like to keep the old one running, since this is a business site.
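
    A few places worth checking on the old box while it is still reachable, since on-the-fly substitution like this is usually done either by an extra Apache module or by a tampered binary. Paths below assume a Red Hat-style layout; adjust for your distribution:

    ```bash
    # Which modules is Apache actually loading? An unfamiliar .so here is the usual culprit.
    apachectl -M

    # Any config directives that mention iframes or output filters?
    grep -Ri "iframe" /etc/httpd/ 2>/dev/null
    grep -Ri "filter" /etc/httpd/conf.d/ 2>/dev/null

    # Recently replaced module binaries (mtimes can be faked, so also compare hashes)
    ls -lt /etc/httpd/modules/ | head

    # Verify the packaged Apache files against the RPM database (size/checksum changes)
    rpm -V httpd

    # Libraries force-loaded into every process on the system
    cat /etc/ld.so.preload 2>/dev/null
    ```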

    Read the article

  • How to install Eclipse + PHP Development Tools (PDT) + Debugger on Mac in The Year 2010

    - by aphex5
    I had a lot of trouble installing Eclipse and PDT on my system. It took two days, largely because all the tutorials I could find were out of date (written in 2008; it's 2010 now) and various steps they included were no longer necessary, broken, or irrelevant. I wanted to write up my process here so it can be improved upon (via wiki) as time goes on.
    1) Install Eclipse without the PHP plugin ("Eclipse Classic"). This gives you a complete Eclipse, which I find preferable, as the UI is more fleshed out (e.g. you get a default list of Perspectives, which helps you understand what Perspectives are).
    2) Install the PDT SDK with the Help > Install New Software menu item. You'd think you'd be done here, but if you try to run something, it will fail complaining about not having a debugger.
    3) Install the Zend Debugger. It will fail if you try to use the Install New Software option, as many tutorials suggest ("No repository found containing osgi.bundle.org.zend.php.debug.debugger.5.3.7.v20091116"). Instead, download it from http://www.zend.com/en/community/pdt and manually copy the features/ and plugins/ directories into your Eclipse install (these instructions are not written anywhere).
    4) Restart Eclipse.
    5) Monkey with preferences for a while: if you followed a previous tutorial and tried to manually add your PHP executable to Eclipse prefs (/usr/bin/php), remove it (PHP > PHP Executables) and set one of the Zend Debugger executables as the default. If you've already tried to execute a .php file, remove the existing "Run" profile you (maybe weren't aware that you) created (Run > Debug Configurations...).
    6) Eclipse works! You should be able to run a .php file as a script just fine.

    Read the article

  • How do I DYNAMICALLY cast in C# and return for a property

    - by ken-forslund
    I've already read threads on the topic, but can't find a solution that fits. I'm working on a drop-down list control that takes an enum and uses it to populate itself. I found a VB.NET one; during the porting process, I discovered that it uses DirectCast() to set the type as it returns the SelectedValue. See the original VB here: http://jeffhandley.com/archive/2008/01/27/enum-list-dropdown-control.aspx
    The gist is, the control has
    Type _enumType; // gets set when the datasource is set; it is the type of the specific enum
    The SelectedValue property looks like this (remember, it doesn't work):
    public Enum SelectedValue // Shadows Property
    {
        get
        {
            // Get the value from the request to allow for disabled viewstate
            string RequestValue = this.Page.Request.Params[this.UniqueID];
            return Enum.Parse(_enumType, RequestValue, true) as _enumType;
        }
        set
        {
            base.SelectedValue = value.ToString();
        }
    }
    Now this touches on a core point that I think was missed in the other discussions. In darn near every example, folks argued that DirectCast wasn't needed, because in every example they statically defined the type. That's not the case here: as the programmer of the control, I don't know the type, therefore I can't cast to it. Additionally, the following lines won't compile, because C# casting doesn't accept a type held in a variable, whereas VB's CType and DirectCast can accept a Type as a function parameter:
    return Enum.Parse(_enumType, RequestValue, true);
    return Enum.Parse(_enumType, RequestValue, true) as _enumType;
    return (_enumType)Enum.Parse(_enumType, RequestValue, true);
    return Convert.ChangeType(Enum.Parse(_enumType, RequestValue, true), _enumType);
    return CastTo<_enumType>(Enum.Parse(_enumType, RequestValue, true));
    So, any ideas on a solution? What's the .NET 3.5 best way to resolve this?
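
    One detail that makes this easier: Enum.Parse already does the runtime-typed conversion (it takes a System.Type object and returns the value boxed as object), and every enum value is-a System.Enum, so a plain cast to the abstract base class satisfies the property. A sketch of the getter rewritten that way, assuming it is acceptable for the property to expose System.Enum rather than the concrete enum type:

    ```csharp
    public Enum SelectedValue // Shadows Property
    {
        get
        {
            // Get the value from the request to allow for disabled viewstate
            string requestValue = this.Page.Request.Params[this.UniqueID];

            // Enum.Parse accepts the runtime Type and returns a boxed enum value;
            // casting to the abstract base class System.Enum needs no compile-time type.
            return (Enum)Enum.Parse(_enumType, requestValue, true);
        }
        set
        {
            base.SelectedValue = value.ToString();
        }
    }
    ```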

    Read the article

  • Filtering most out of XML with XSL?

    - by Gnudiff
    I need to transform a lot of XML files (Fedora export) into a different kind of XML. I am trying to do it with XSL stylesheets and checking the output with the msxsl transformer. Suppose I have an XML file like this (assuming there are actually other nodes inside AAA, OBJ, and all the other nodes), Source.XML:
    <DOC>
      <AAA>
        <STUFF>example</STUFF>
        <OBJ>
          <OBJVERS id="A1" CREATED="2008-02-18T13:28:08.245Z"/>
          <OBJVERS id="A2" CREATED="2008-02-19T10:42:41.965Z"/>
          <OBJVERS id="A13" CREATED="2009-03-16T12:43:11.703Z"/>
        </OBJ>
      </AAA>
      <FFF/>
      <GGG/>
      <DDD>
        <FILE />
      </DDD>
    </DOC>
    which I need to turn into something like this (Target.XML):
    <MYOBJ>
      <ELEM>contents of the OBJVERS with the biggest id OR creation date (whichever is easier) go here</ELEM>
      <IMAGE>contents of the <FILE> node go here</IMAGE>
    </MYOBJ>
    The main problem I have, since I am new to XSL (and for this particular task do not have enough time to learn it properly), is that I can't understand how to tell the XSL processor not to process anything else; I keep getting output from other nodes, for example.
    Update: basically, I solved this problem in the meantime. I will post my own answer and close the question.
    Update 2: OK, Andrew's answer works too, so I am just accepting it. :)
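
    The stray output comes from XSLT's built-in templates, which copy text nodes for any element you never match explicitly. A stylesheet that matches the document root once and pulls out only what it wants never falls through to those defaults. A hedged sketch against the Source/Target shapes above (it picks the OBJVERS whose numeric part of @id is largest; adjust if you prefer sorting on CREATED):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" indent="yes"/>

      <!-- Only one template, matching the root element: nothing else is ever processed. -->
      <xsl:template match="/DOC">
        <MYOBJ>
          <ELEM>
            <xsl:for-each select="AAA/OBJ/OBJVERS">
              <!-- sort numerically on the id with its leading letter stripped: A13 -> 13 -->
              <xsl:sort select="substring(@id, 2)" data-type="number" order="descending"/>
              <xsl:if test="position() = 1">
                <xsl:copy-of select="."/>
              </xsl:if>
            </xsl:for-each>
          </ELEM>
          <IMAGE>
            <xsl:copy-of select="DDD/FILE"/>
          </IMAGE>
        </MYOBJ>
      </xsl:template>
    </xsl:stylesheet>
    ```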

    Read the article

  • Error Converting PIL B&W images to Numpy Arrays

    - by Elliot
    I am getting weird errors when I try to convert a black-and-white PIL image to a numpy array. An example of the code I am working with is below:
    if image.mode != '1':
        image = image.convert('1')  # convert to B&W
    data = np.array(image)          # convert data to a numpy array
    n_lines = data.shape[0]         # number of raster passes
    line_range = range(data.shape[1])
    for l in range(n_lines):
        # process one horizontal line of the image
        line = data[l]
        for n in line_range:
            if line[n] == 1:
                write_line_to(xl, z+scale*n, speed)  # conversion to other program code
            elif line[n] == 0:
                run_to(xl, z+scale*n)                # conversion to other program code
    I have tried this using both array and asarray for the conversion, and gotten different errors. If I use array, then the data I get out is nothing like what I put in: it looks like several very shrunken partial images side by side, with the remainder of the image space filled in with black. If I use asarray, then the whole of Python crashes during the raster step (on a random line). If I work with a greyscale image ('L'), then neither of these errors occurs, for either array or asarray. Does anyone know what I am doing wrong? Is there something odd about the way PIL encodes B&W images, or something special I need to pass numpy to make it convert properly?
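
    Mode '1' images store pixels packed eight to a byte, and older PIL/numpy combinations are known to misread that layout, which matches the "shrunken partial images" symptom. A workaround sketch that sidesteps mode '1' entirely: convert to 8-bit greyscale (which the question says already behaves) and do the black/white threshold in numpy. The threshold value and the 1-means-white convention are assumptions to tune for your images:

    ```python
    import numpy as np
    from PIL import Image

    def image_to_bw_array(image, threshold=128):
        """Return a 2-D array of 0s and 1s from any PIL image.

        Goes through mode 'L' (one byte per pixel), which PIL and numpy agree on,
        instead of mode '1' (bit-packed), which is what tends to get misread.
        """
        grey = image.convert('L')
        data = np.asarray(grey, dtype=np.uint8)
        return (data >= threshold).astype(np.uint8)  # 1 = white, 0 = black

    # usage: data = image_to_bw_array(Image.open('drawing.png'))
    ```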

    Read the article

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:
    public double[] runMyAlgorithm(double[] inputData);
    or alternatively a reference could be passed to an array in which to store the output data:
    public void runMyAlgorithm(double[] inputData, double[] outputData);
    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:
    1) Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2) Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside is that only one thread could run the algorithm at any one time.
    3) Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4) Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.
    5) Allocate extremely large arrays (e.g. double[10000000]) and provide the algorithm with offsets into the array, so that different threads use different areas of the array independently. Obviously requires some code to manage the offsets and the allocation of array ranges.
    Any thoughts on which approach would be best (and why)?
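
    Option 3 is the one that keeps both the API clean and the steady state allocation-free, so here is a minimal sketch of it (the class name, the buffer size, and the loop body are placeholders, not the real algorithm):

    ```java
    public class MyAlgorithm {
        private static final int TEMP_SIZE = 100000;

        // One scratch buffer per thread, created lazily and then reused for every call
        // that thread makes, so concurrent runs never share state and no per-call
        // temporary garbage is produced once things are warmed up.
        private static final ThreadLocal<double[]> SCRATCH = new ThreadLocal<double[]>() {
            @Override
            protected double[] initialValue() {
                return new double[TEMP_SIZE];
            }
        };

        public double[] runMyAlgorithm(double[] inputData) {
            double[] temp = SCRATCH.get();
            double[] output = new double[inputData.length];
            for (int i = 0; i < inputData.length; i++) {
                temp[i % TEMP_SIZE] = inputData[i] * 0.5;  // placeholder computation
                output[i] = temp[i % TEMP_SIZE];
            }
            return output;
        }
    }
    ```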

    Read the article

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the Logic and Business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage and I believe that we have a pretty solid product. As we have been working through some of the more complex logic I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock object setup process that must take place, just right, to allow for certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design or lack of composition to blame for my overly complex tests? I've tried base classing tests, creating commonly used mock objects and ensuring that I utilize the container as much as possible to ease this issue but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?

    Read the article

  • Is it possible to store pointers in shared memory without using offsets?

    - by Joseph Garvin
    When using shared memory, each process may mmap the shared region into a different area of its address space. This means that when storing pointers within the shared region, you need to store them as offsets from the start of the shared region. Unfortunately, this complicates the use of atomic instructions (e.g. if you're trying to write a lock-free algorithm). For example, say you have a bunch of reference-counted nodes in shared memory, created by a single writer. The writer periodically, atomically updates a pointer 'p' to point to a valid node with a positive reference count. Readers want to atomically increment the reference count of whatever 'p' points to, which works because 'p' points to the beginning of a node (a struct) whose first element is the reference count. Since p always points to a valid node, incrementing the ref count is safe, and makes it safe to dereference 'p' and access other members. However, this all only works when everything is in the same address space. If the nodes and the 'p' pointer are stored in shared memory, then clients suffer a race condition:
    1) x = read p
    2) y = x + offset
    3) increment the refcount at y
    During step 2, p may change and x may no longer point to a valid node. The only workaround I can think of is somehow forcing all processes to agree on where to map the shared memory, so that real pointers rather than offsets can be stored in the mmap'd region. Is there any way to do that? I see MAP_FIXED in the mmap documentation, but I don't know how I could pick an address that would be safe.
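
    For the "agree on an address" route, the mechanics are just mmap with an explicit address and MAP_FIXED in every process; the genuinely hard part, finding an address guaranteed to be unused in all of them, is assumed away in this sketch (the address constant below is arbitrary, and on older glibc you link with -lrt for shm_open):

    ```c
    /* Hedged sketch: every process maps the same shm object at the same agreed-upon
     * address, so raw pointers stored inside the region stay valid everywhere. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define REGION_SIZE (1 << 20)
    /* An address all cooperating processes agree on ahead of time (assumption). */
    #define REGION_ADDR ((void *)0x7f0000000000UL)

    int main(void)
    {
        int fd = shm_open("/nodes", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, REGION_SIZE) < 0) { perror("shm"); return 1; }

        /* MAP_FIXED forcibly replaces whatever mapping is already at REGION_ADDR, which
         * is why the address must be known to be free; some systems also offer a
         * non-clobbering variant (MAP_FIXED_NOREPLACE) that fails instead. */
        void *base = mmap(REGION_ADDR, REGION_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_FIXED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped at %p (same value in every process that does this)\n", base);
        return 0;
    }
    ```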

    Read the article

  • bash: listing files in date order, with spaces in filenames

    - by Jason Judge
    I am starting with a file containing a list of hundreds of files (full paths) in a random order. I would like to list the details of the ten latest files in that list. This is my naive attempt:
    ls -las -t `cat list-of-files.txt` | head -10
    That works so long as none of the files have spaces in their names, but fails if they do, as those filenames are split at the spaces and treated as separate files. I have tried quoting the files in the original list-of-files file, but the command substitution still splits the filenames at the spaces. The only way I can think of doing this is to ls each file individually (using xargs perhaps) and create an intermediate file with the file listings and the date, in a sortable order, as the first field in each line, then sort that intermediate file. However, that feels a bit cumbersome and inefficient (hundreds of ls commands rather than one or two), though it may be the only way to do it. Is there any way to pass "ls" a list of files to process where those files could contain spaces? It seems like it should be simple, but I'm stumped.
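
    If GNU xargs is available, its -d option keeps each line of the list intact as a single argument, so spaces inside filenames survive and a single ls call does the sorting. A sketch (the -d flag is GNU-specific; the tr/-0 variant is the more portable spelling):

    ```bash
    # Split the list on newlines only, never on spaces inside a name
    xargs -d '\n' ls -lasdt -- < list-of-files.txt | head -10

    # Portable variant: convert the list to NUL-separated names first
    tr '\n' '\0' < list-of-files.txt | xargs -0 ls -lasdt -- | head -10

    # Caveat: if the list is so long that xargs has to run ls more than once,
    # each batch is sorted separately and the overall "ten latest" is no longer exact.
    ```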

    Read the article

  • Migrating a large amount of data from an old publishing site to a new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites, with 20,000+ rows of data, to be moved/aggregated into ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into one single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient). So my questions to you are:
    1) What kind of skill set should we look for in an applicant?
    2) There will be a large amount of input from our side... is getting somebody to work remotely out of the question?
    3) How long would a task like this traditionally take? (I know this question is very subjective, but an estimation would be awesome.)
    4) Do you have any recommendations for firms who would be able to take on a large task like this?
    Thanks in advance, Tom

    Read the article

  • Working with a database

    - by Susan
    I have an existing program for a bank account: the user can create an account, then withdraw or deposit money. As a transaction is processed, labels show the current information, such as Beginning Balance, Transaction Fee, Withdrawal Amount, and Ending Balance. Now I need to keep track of the transactions being processed in a SQL database. I know how to add a database to the project and then set it up to display all of the data from a table in a GridView. That assumes I manually entered data into the table; however, the table should be blank when the program starts, and as I process transactions the data should be written to the table. How do I bind my existing fields (labels) to a DataTable and send the text to the table? The book I have is all about displaying data that is already in the table, and the couple of tutorials I have been through online cover much the same subject. I haven't found anything on how to do what I am looking for. Can someone help me out here? I don't mind references to other websites that might have the answer. Thanks, Susan

    Read the article

  • Efficient Method for Preventing Hotlinking via .htaccess

    - by Michael Robinson
    I need to confirm something before I go accuse someone of ... well I'd rather not say. The problem: We allow users to upload images and embed them within text on our site. In the past we allowed users to hotlink to our images as well, but due to server load we unfortunately had to stop this. Current "solution": The method the programmer used to solve our "too many connections" issue was to rename the file that receives and processes image requests (image_request.php) to image_request2.php, and replace the contents of the original with <?php header("HTTP/1.1 500 Internal Server Error") ; ?> Obviously this has caused all images with their src attribute pointing to the original image_request.php to be broken, and is also the wrong code to be sending in this case. Proposed solution: I feel a more elegant solution would be: In .htaccess If the request is for image_request.php Check referrer If referrer is not our site, send the appropriate header If referrer is our site, proceed to image_request.php and process image request What I would like to know is: Compared to simply returning a 500 for each request to image_request.php: How much more load would be incurred if we were to use my proposed alternative solution outlined above? Is there a better way to do this? Our main concern is that the site stays up. I am not willing to agree that breaking all internally linked images is the best / only way to solve this. I refuse to tell our users that because of something WE changed they must now manually change the embed code in all their previously uploaded content.

    Read the article

  • ObjectDataSource problems in Visual Studio 2008

    - by kamal
    After going through the process of adding the various attributes like insert, delete and update, editing works when I run the page in the browser, but updating and deleting don't (the error below is for the update; delete shows the same thing). My friends think I need to write code to fix the problem; can you help me please? It shows this:
    Server Error in '/WebSite3' Application.
    ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.]
    System.Web.UI.WebControls.ObjectDataSourceView.GetResolvedMethodData(Type type, String methodName, IDictionary allParameters, DataSourceOperation operation) +1119426
    System.Web.UI.WebControls.ObjectDataSourceView.ExecuteUpdate(IDictionary keys, IDictionary values, IDictionary oldValues) +1008
    System.Web.UI.DataSourceView.Update(IDictionary keys, IDictionary values, IDictionary oldValues, DataSourceViewOperationCallback callback) +92
    System.Web.UI.WebControls.GridView.HandleUpdate(GridViewRow row, Int32 rowIndex, Boolean causesValidation) +907
    System.Web.UI.WebControls.GridView.HandleEvent(EventArgs e, Boolean causesValidation, String validationGroup) +704
    System.Web.UI.WebControls.GridView.OnBubbleEvent(Object source, EventArgs e) +95
    System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
    System.Web.UI.WebControls.GridViewRow.OnBubbleEvent(Object source, EventArgs e) +123
    System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
    System.Web.UI.WebControls.LinkButton.OnCommand(CommandEventArgs e) +118

    Read the article

  • Returning an autoreleased NSString still causes memory leaks

    - by hookjd
    I have a simple function that returns an NSString after decoding it. I use it a lot throughout my application, and it appears to create a memory leak (according to the Leaks tool) every time I use it. Leaks tells me the problem is on the line where I alloc the NSString that I am going to return, even though I autorelease it. Here is the function:
    -(NSString *) decodeValue {
        NSString *newString;
        newString = [self stringByReplacingOccurrencesOfString:@"#" withString:@"$"];
        NSData *stateData = [NSData dataWithBase64EncodedString:newString];
        NSString *convertState = [[[NSString alloc] initWithData:stateData encoding:NSUTF8StringEncoding] autorelease];
        return convertState;
    }
    My understanding of autorelease is that it should be used in exactly this way: I want to hold onto the object just long enough to return it from my function, and then let it be released later by the autorelease pool. So I believe I can use this function through code like this without manually releasing anything:
    NSString *myDecodedString = [myString decodeValue];
    But this process is reporting leaks, and I don't understand how to change it to avoid them. What am I doing wrong?

    Read the article

  • get_user running in kernel mode returns an error

    - by Fangkai Yang
    Hi all, I have a problem with the get_user() macro. What I did is as follows. I run the following program:
    int main() {
        int a = 20;
        printf("address of a: %p", &a);
        sleep(200);
        return 0;
    }
    When the program runs, it outputs the address of a, say 0xbff91914. Then I pass this address to a module running in kernel mode that retrieves the contents at this address (at the time I did this, I also made sure the process didn't terminate, because I put it to sleep for 200 seconds). The address is first sent as a string, and I cast it to a pointer type:
    int *ptr = (int *)simple_strtol(buffer, NULL, 16);
    printk("address: %p", ptr);  // I use this line to make sure the cast is correct
    When running, it outputs bff91914, as expected. Then:
    int val = 0;
    int res;
    res = get_user(val, (int *)ptr);
    However, res is always non-zero, meaning that get_user returns an error. I am wondering what the problem is.... Thank you! -- Fangkai

    Read the article

  • How to Avoid PHP Object Nesting/Creation Limit?

    - by Will Shaver
    I've got a handmade ORM in PHP that seems to be bumping up against an object limit and causing PHP to crash. Here's a simple script that will cause crashes:
    <?
    class Bob {
        protected $parent;
        public function Bob($parent) {
            $this->parent = $parent;
        }
        public function __toString() {
            if($this->parent)
                return (string) "x " . $this->parent;
            return "top";
        }
    }
    $bobs = array();
    for($i = 1; $i < 40000; $i++) {
        $bobs[] = new Bob($bobs[$i - 1]);
    }
    ?>
    Even running this from the command line will cause issues. Some boxes take more than 40,000 objects. I've tried it on Linux/Apache (fail), but my app runs on IIS/FastCGI. On FastCGI this causes the famous "The FastCGI process exited unexpectedly" error. Obviously 20k objects is a bit high, but it crashes with far fewer objects if they have data and nested complexity. FastCGI isn't the issue - I've tried running it from the command line. I've tried setting the memory limit to something really high - 6,000MB - and to something really low - 24MB. If I set it low enough I'll get the "allocated memory size xxx bytes exhausted" error. I'm thinking that it has to do with the number of functions that are called - some kind of nesting prevention. I didn't think that my ORM's nesting was that complicated, but perhaps it is. I've got some pretty clear cases where if I load just ONE more object it dies, but loads in under 3 seconds if it works.

    Read the article

  • SVN checkout or export for production environment?

    - by Eran Galperin
    In a project I am working on, we have an ongoing discussion amongst the dev team - should the production environment be deployed as a checkout from the SVN repository or as an export? The development environment is obviously a checkout, since it is constantly updated. For the production, I'm personally for checking out the main trunk, since it makes future updates easier (just run svn update). However some of the devs are against it, as svn creates files with the group/owner and permissions of the svn process (this is on a linux OS, so those things matter), and also having the .svn directories on the production seem to them to be somewhat dirty. Also, if it is a checkout - how do you push individual features to the production without including in-development code? do you use tags or branch out for each feature? any alternatives? EDIT: I might not have been clear - one of the requirement is to be able to constantly be able to push fixes to the production environment. We want to avoid a complete build (which takes much longer than a simple update) just for pushing critical fixes.

    Read the article
