Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.


  • Best way to store data for Greasemonkey based crawler?

    - by Björn
    I want to crawl a site with Greasemonkey and wonder if there is a better way to temporarily store values than with GM_setValue. What I want to do is crawl my contacts in a social network and extract the Twitter URLs from their profile pages. My current plan is to open each profile in its own tab, so that it looks more like a normal browsing person (i.e. CSS, scripts and images will be loaded by the browser), then store the Twitter URL with GM_setValue. Once all profile pages have been crawled, create a page using the stored values. I am not so happy with the storage option, though. Maybe there is a better way? I have considered inserting the user profiles into the current page so that I could process them all with the same script instance, but I am not sure whether XMLHttpRequest looks indistinguishable from normal user-initiated requests.
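    A minimal sketch of the GM_setValue approach (the key prefix and the link selector are made up for illustration, not part of the question): each profile page stores its Twitter URL under a per-profile key, and a collector page reads them all back with GM_listValues.

        // On each profile page: store the Twitter URL under a per-profile key.
        var twitterLink = document.querySelector('a[href*="twitter.com"]');
        if (twitterLink) {
            GM_setValue('twitter.' + location.pathname, twitterLink.href);
        }

        // On the collector page: read every stored URL back.
        var urls = [];
        GM_listValues().forEach(function (key) {
            if (key.indexOf('twitter.') === 0) {
                urls.push(GM_getValue(key));
            }
        });

    GM_listValues is what makes this workable: the collector page can discover all the keys without maintaining a separate index value.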


  • Zip up groups of webpages for viewing in the browser

    - by Arlen Beiler
    I think there should be a standard for saving and viewing bunches of webpages as a website. For instance, say I have a whole bunch of pages, such as I get from the WordPress plugin "Really Static" (which saves the entire site), and I have all the links start with a slash (to make linking to supporting files easier). I can't really use those links if I am reading the pages from the file system. But if there were a standard where we could zip up the files, give the archive a unique extension (like "hzip" for HTML zip), and open it with any browser, the browser would display it as though the root of the archive were the root of the pages: "file://examplefile.hzip/". The links would then all work. This would really help with sharing and copying groups of webpages. Is this a good idea? A bad one? What do you think?


  • Apache security for multi-user development web server

    - by mrmartinblue
    I've been searching and reading through documents all morning and understand that I need to use some combination of chown and probably 'jailing' to securely give programmers access to directories on my CentOS web server. Here's the situation: I have an Apache web server that has any number of virtual sites located in /var/www/site1, /var/www/site2, etc. I have different developers that need full SSH and FTP (vsFTPd) access, but only to the site they are working on. What is the best way to create and maintain security in this scenario? My thought would be to create a new user for each coder, jail that user to the website directory they are allowed to work in, add their user to a group, and set the webroot's owner to that group. Any thoughts? Good, bad, ugly? Thanks!
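    The group-and-ownership half of that plan might look like the following shell sketch (group and user names are placeholders; the SSH jail itself is a separate setup, e.g. OpenSSH's ChrootDirectory directive):

        # One group per site; give each developer membership.
        groupadd site1devs
        useradd -m -G site1devs alice

        # Hand the webroot to the group and let members write to it.
        chown -R root:site1devs /var/www/site1
        chmod -R g+w /var/www/site1

        # setgid on directories so new files inherit the site group.
        find /var/www/site1 -type d -exec chmod g+s {} \;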


  • Difficulty determining the file type of text database file

    - by Joseph Silvashy
    So the USDA has some weird database of general nutrition facts about food, and naturally we're going to steal it for use in our app. The format of the lines is like the following:

        ~01001~^~0100~^~Butter, salted~^~BUTTER,WITH SALT~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87
        ~01002~^~0100~^~Butter, whipped, with salt~^~BUTTER,WHIPPED,WITH SALT~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87
        ~01003~^~0100~^~Butter oil, anhydrous~^~BUTTER OIL,ANHYDROUS~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87
        ~01004~^~0100~^~Cheese, blue~^~CHEESE,BLUE~^~~^~~^~Y~^~~^0^~~^6.38^4.27^8.79^3.87

    Those odd ~ and ^ characters separate the values. It also lacks a header row, but that's OK; I can figure that out from the other documentation on their site: http://www.ars.usda.gov/Services/docs.htm?docid=8964 Any help would be great! If it matters, we're making an open/free API with Ruby to query this data. Additionally I'm having a tough time posing this question, so I've made it a community wiki so we can all pitch in!
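    Reading the sample, ^ looks like the field separator and ~ like a quote character around text fields (numeric fields are unquoted). A sketch of that parsing idea, in JavaScript for illustration even though the poster's API is Ruby:

        // Split one line into clean field values: '^' separates fields,
        // '~' quotes text fields and can simply be stripped.
        function parseLine(line) {
            return line.split('^').map(function (field) {
                return field.replace(/^~|~$/g, '');
            });
        }

        var fields = parseLine('~01001~^~0100~^~Butter, salted~^0^6.38');
        // fields => ['01001', '0100', 'Butter, salted', '0', '6.38']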


  • Create a Sin wave line with Processing

    - by Nintari
    Hey everybody, first post here, and probably an easy one. I've got this code from Processing's reference site (http://processing.org/reference/sin_.html):

        float a = 0.0;
        float inc = TWO_PI/25.0;
        for (int i = 0; i < 100; i = i+4) {
            line(i, 50, i, 50 + sin(a)*40.0);
            a = a + inc;
        }

    However, what I need is a line that follows the curve of a sin wave, not lines representing points along the curve and ending at the 0 axis. So basically I need to draw an "S" shape with a sin wave equation. Can someone run me through how to do this? Thank you in advance, -Askee
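    The usual fix is to connect consecutive points of the curve with short segments instead of dropping a vertical line to y = 50 at each step. Here is that idea sketched in JavaScript on a canvas (assuming a canvas element exists on the page); in Processing the same loop shape works with line() between the previous and current points.

        var ctx = document.querySelector('canvas').getContext('2d');
        var inc = (2 * Math.PI) / 25;
        var a = 0;
        ctx.beginPath();
        ctx.moveTo(0, 50 + Math.sin(a) * 40);      // first point of the curve
        for (var i = 4; i < 100; i += 4) {
            a += inc;
            ctx.lineTo(i, 50 + Math.sin(a) * 40);  // segment to the next point
        }
        ctx.stroke();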


  • TFS 2010 SharePoint Web Part error

    - by Shane
    I downloaded the Microsoft® Visual Studio® 2010 and Team Foundation Server® 2010 Beta 2 for Microsoft® Virtual PC 2007 SP1 image. When I go into the default installation of the TFS project site in SharePoint (http://vs2010beta2/sites/DefaultCollection/IBuySpy/Dashboards/ProjectDashboard_wss.aspx) I get this error on the web parts: "There are no accessible team projects in this Team Project Collection. Contact your Team Foundation Server administrator." In the Team Foundation Server Administration Console, under Team Project Collection, on the Team Projects tab, the projects are there. This is the default installation. Any suggestions?


  • Manually logging in a user without password

    - by Agos
    Hi everybody; I hope you can help me figure out the best way to implement a manual (server-side initiated) login without using the password. Let me explain the workflow:

        1. User registers. "Thank you! An email with an activation link has been sent blablabla" (the account now exists but is marked not enabled)
        2. User opens the email and clicks the link (the account is enabled)
        3. "Thank you! You can now use the site"

    What I'm trying to do is log in the user after he has clicked the email link, so he can start using the website right away. I can't use his password since it's encrypted in the DB. Is the only option writing a custom authentication backend?


  • Is this a secure approach to ActiveRecord in Rails?

    - by Adnan
    Hello, I am using the following for my customers to unsubscribe from my mailing list:

        def index
          @user = User.find_by_salt(params[:subscribe_code])
          if @user.nil?
            flash[:notice] = "the link is not valid...."
            render :action => 'index'
          else
            Notification.delete_all(:user_id => @user.id)
            flash[:notice] = "you have been unsubscribed....."
            redirect_to :controller => 'home'
          end
        end

    My link looks like http://site.com/unsubscribe/32hj5h2j33j3h333, so the above compares the random string to a field in my user table and accordingly deletes data from the notification table. My question: is this approach secure? Is there a better/more efficient way of doing this? All suggestions are welcome.


  • Which RDFa parser for Java supports currently used RDFa attributes?

    - by lennyks
    I am building an app in Java using Jena for semantic information scraping, and I am looking for an RDFa parser that would allow me to correctly extract all the RDFa statements. Specifically, one that extracts info about the namespaces used and, presuming the RDFa tags in the page are correct, produces correct triples, ones that distinguish between object and data properties. I went through all the Java RDFa parsers listed at http://rdfa.info/wiki/Consume. They all struggle to extract any RDFa statements, and when they do not crash outright, Jena's RDFa parser shows plenty of errors and then dies a terrible death; the data is of little use, as it is incorrectly processed and generally mixed up. I am a newbie in this area, so please be gentle :) I was also thinking of using a library written in a different language, but then again I don't really know how to plug it into Java code. Any suggestions?


  • How do you enable block folding for Python comments in TextMate?

    - by Dave Gallagher
    In TextMate 1.5.10 r1623, you get little arrows that allow you to fold method blocks. Unfortunately, if you have a multi-line Python comment, TextMate doesn't recognize it, so you can't fold it:

        def foo():
            """
            How do I fold these comments?
            """
            print "bar"

    TextMate's manual covers how to customize folding (http://manual.macromates.com/en/navigation_overview#customizing_foldings), but I'm not skilled enough in regex to do anything about it. TextMate uses the Oniguruma regex API, and I'm using the default Python.tmbundle updated to the newest version via GetBundles. Does anyone have an idea of how to do this? Thanks in advance for your help! :)
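    Folding in a TextMate grammar is driven by two Oniguruma regexes, foldingStartMarker and foldingStopMarker, in the language definition (editable via Bundles > Bundle Editor). An untested sketch of the extra alternatives one might merge into the Python grammar's existing markers, treating a lone opening """ as a fold start and a lone closing """ as a fold stop:

        foldingStartMarker = '^\s*"""(?!.*""")';
        foldingStopMarker = '^\s*"""\s*$';

    The (?!.*""") lookahead keeps one-line docstrings from opening a fold; the default def/class alternatives in the existing markers would need to be kept alongside these, and the start/stop overlap on a bare """ line may still need tuning.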


  • Facebook canvas app showing wordpress blog in iframe - how to link to specific pages?

    - by James Olney
    Sorry for the cumbersome title. I have a Facebook canvas app that calls up a WordPress site in an iframe. This is fine and works as hoped, but is there a way to link directly to a page within it? For example, the page I want to reach is actually www.blog.com/monkeys/, but I want to be able to use a link like www.facebook.com/monkeyblogapp/app_243495067635997/monkeys/ so that I can give out that URL and get people to see the right bit of content, but within the Facebook environment. I have seen people mention callback URLs (for example in "Direct links to pages in Facebook iFrame application?", which as far as I can tell just means having the app addresses match the website), but that doesn't work. Any suggestions? Many thanks, James


  • Are Google Chrome extension "content" scripts sandboxed?

    - by jabapyth
    I was under the impression that content_scripts were executed right on the page, but it now seems as though there's some sandboxing going on. I'm working on an extension to log all XHR traffic of a site (for debugging and other development purposes), and in the console, the following sniff code works:

        var o = window.XMLHttpRequest.prototype.open;
        window.XMLHttpRequest.prototype.open = function () {
            console.log(arguments, 'open');
            return o.apply(this, arguments);
        };
        console.log('myopen');
        console.log(window, window.XMLHttpRequest, window.XMLHttpRequest.prototype,
                    o, window.XMLHttpRequest.prototype.open);

    This logs a message every time an XHR is sent. When I put it in an extension, however, the real prototype doesn't get modified. Apparently the window.XMLHttpRequest.prototype that my script sees differs from that of the actual page. Is there some way around this? Also, is this sandboxing behavior documented anywhere? I looked around, but couldn't find anything.
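    Content scripts run in an "isolated world": they share the page's DOM but not its JavaScript objects, so the patch only affects the content script's own copy of the prototype. A common workaround (a plain-DOM sketch, not an extension-specific API) is to inject a script element so the patch runs in the page's real context:

        // Build the patch as a function, serialize it, and inject it
        // into the page so it runs against the page's own prototype.
        function patchXHR() {
            var o = window.XMLHttpRequest.prototype.open;
            window.XMLHttpRequest.prototype.open = function () {
                console.log(arguments, 'open');
                return o.apply(this, arguments);
            };
        }
        var script = document.createElement('script');
        script.textContent = '(' + patchXHR.toString() + ')();';
        (document.head || document.documentElement).appendChild(script);
        script.parentNode.removeChild(script);  // the code has already run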


  • How can I asynchronously monitor a file in Perl?

    - by Hussain
    I am wondering if it is possible, and if so how, to create a Perl script that constantly monitors a file/db and then calls a subroutine to perform text processing when the file changes. I'm pretty sure this would be possible using sockets, but this needs to be used for a webchat application on a site running on a shared host, and I'm not so sure sockets would be allowed on it. The basic idea is:

        1. create a listener for a chat file/database
        2. when the file is updated with a new message, call a subroutine
        3. the called subroutine will send the new message back to the browser to be displayed

    Thanks in advance.
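    On a shared host the usual fallback is polling the file's modification time. A sketch of that idea, written here in JavaScript (Node) rather than Perl; chat.log and onFileChanged are made-up names for illustration:

        var fs = require('fs');
        var last = 0;
        setInterval(function () {
            fs.stat('chat.log', function (err, stats) {
                if (err) return;                       // file missing: skip this tick
                if (stats.mtime.getTime() !== last) {  // mtime moved: new message
                    last = stats.mtime.getTime();
                    onFileChanged();
                }
            });
        }, 1000);

    In Perl the equivalent check is the mtime field, (stat($file))[9], inside a sleep loop.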


  • Using jQuery ajax response data

    - by Theopile
    Hi again. I am using an AJAX POST and am receiving data in the form of HTML. I need to split up the data and place pieces of it all over the page. I built my response data to be something like:

        <p id='greeting'> Hello there and Welcome </p>
        <p id='something'>First timer visiting our site eh'</p>

    It is a little more complicated and dynamic, but I can figure it out if I get this question answered. Thanks.

        $.ajax({
            type: 'POST',
            url: 'confirm.php',
            data: "really=yes&sure=yes",
            success: function (data) {
                // Need to split data here
            }
        });
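    One way to do the split (a sketch; the target containers #header-area and #sidebar are made up for illustration) is to parse the returned HTML into a detached element and pull each piece out by id:

        $.ajax({
            type: 'POST',
            url: 'confirm.php',
            data: 'really=yes&sure=yes',
            success: function (data) {
                // Parse the HTML response into a detached wrapper,
                // then place each fragment where it belongs.
                var $response = $('<div>').html(data);
                $('#header-area').html($response.find('#greeting'));
                $('#sidebar').html($response.find('#something'));
            }
        });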


  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code: the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is an example that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

        - Support for batch updates, deletes and inserts
        - Conversion of entities to DataRow, and List<> to a DataTable
        - Extension methods for Delete, Merge, Update, Insert
        - Support for asynchronous calls and cancellation
        - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.


  • Which svn client to install on Windows 7 machine?

    - by user246114
    I just got a new PC running Windows 7 (64-bit). I'd like to install an SVN client (command line only; I don't want TortoiseSVN), but I'm not sure which of the builds listed at http://subversion.apache.org/packages.html#windows to install. Does anyone have any opinions on this? I tried going for the ones hosted by Tigris, but the downloaded zip says to read an install file hosted at their site, and the link is broken. Do we simply download, then call svn.exe as needed, with no need for a real 'install'?


  • Odd C interview question

    - by Brennan Vincent
    Hi guys. I found this problem on a site full of interview questions and was stumped by it. Is there some preprocessor directive that allows one to read from standard input during compilation?

        Write a small C program which, while compiling, takes another program
        from the input terminal, and on running gives the result of the second
        program. (NOTE: The key is, think UNIX.)

    Suppose the program is 1.c. Then, while compiling:

        $ cc -o 1 1.c
        int main() {
            printf("Hello World\n");
        }
        ^D
        $ ./1
        Hello World


  • How does one suppress a 404 status code in a page?

    - by songdogtech
    I've got a WordPress site that includes pages pulled from a different database. The problem is that these other pages return a 404 status code. (The WordPress posts/pages are fine.) The 404'ed pages display fine, and I removed the "Page not Found" text from the title tag in WordPress, but Googlebot and the W3C validator still see the 404 header. So: how does one tell Apache to suppress a 404 status? And will Apache override WordPress's 404 header? Does that make sense? What other info and things should I be looking at? Can I suppress the status code in .htaccess so I don't change WP core files?


  • Is it a bad idea to have a login dialog inside an iframe?

    - by AyKarsi
    We're creating a website where we will be giving out code snippets to our users which they can place on their own websites. These snippets contain a link and a JavaScript include. When clicking the link, an iframe containing the login dialog to our site opens. The user then authenticates inside the iframe, does his work, and when he leaves the iframe his session is closed. We've got it working already and it's very slick. Our main concern, though, is phishing: the user has absolutely no way of verifying where the login page is really coming from. On the other hand, phishing attacks are also successful even when the user can see the fake URL in the address bar. Would you enter your (OpenID) credentials in an iframe? Does anyone know a pattern with which we could minimize the chances of a phishing attack?


  • Software protection

    - by anfono
    I want to protect my software from being used without permission; I will provide it for free to the parties I authorize to use it. Does anyone know a good protection scheme against having it copied and run by unauthorized parties? So far, I have thought about introducing a key validation mechanism: periodically, the user needs to send me (via a web site query) a code, based on which I generate a new code that the app validates against. There is an initial code, and so I can track users... Thoughts? Later edit: I changed the licensing part to avoid unfocused discussion.
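    A sketch of the periodic challenge/response the poster describes, in JavaScript (Node), under the assumption that each install ships with an ID and a shared secret; all names and the weekly window are illustrative:

        var crypto = require('crypto');

        // Both the server and the app compute the same code for the
        // current time window; the app compares the server's answer.
        function expectedCode(installId, secret) {
            var week = Math.floor(Date.now() / (7 * 24 * 3600 * 1000));
            return crypto.createHmac('sha256', secret)
                         .update(installId + ':' + week)
                         .digest('hex');
        }

    The usual caveat applies: since the app holds the secret, a determined user can extract it, so a scheme like this deters casual copying rather than preventing it.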


  • jQuery $(window).width() and $(window).height() return different values when viewport has not been resized

    - by Manca Weeks
    I am writing a site using jQuery that repeatedly calls $(window).width() and $(window).height() to position and size elements based on the viewport size. In troubleshooting I discovered that repeated calls to these functions report slightly different viewport sizes even though the viewport has not been resized. I'm wondering if there is any special case anyone knows of when this happens, or if this is just the way it is. The difference in the sizes reported is 20px or less, it appears. It happens in Safari 4.0.4, Firefox 3.6.2 and Chrome 5.0.342.7 beta on Mac OS X 10.6.2; I didn't test other browsers yet because it doesn't appear to be specific to the browser. I was also unable to figure out what the difference depends upon; if it isn't the viewport size, could there be another factor that makes the results differ? Any insight would be appreciated. Thanks, Manca


  • JS dynamic img change and SEO

    - by Gusepo
    Hi all, I've built a web site using jQuery to make nice transitions between content. The code works this way: there are two imgs (body and footer); when I click on a link, instead of going to another page I fade out the two imgs and change their src attributes, and when the new imgs are loaded I fade them back in. I'm using SWFAddress to allow users to go directly to internal content. Now I'd like to make my content indexable by Google and other search engines. All the text content is inside the imgs, so I've got the text in the ALT attribute. My question: if I dynamically change the imgs' ALT attributes using JS, will spiders be able to read them properly? Consider that I'm using SWFAddress to create a sitemap. Thanks


  • Can't click on input element behind div element

    - by Preston
    On my site (http://alsite.com.br/solalev/) I have some elements at the bottom of the page that I can't click through. Above the elements is a div called push; I use this div to make the footer always stay at the bottom of my page even when the content is smaller. (I don't know if I'm doing this right, but it has worked.) So on Chrome and Firefox I can't click, but on IE it works. I tried this:

        .push {
            pointer-events: none;
        }

    but nothing happens...


  • CSS Issue in Firefox/IE

    - by bethhilson
    I'm working on a site's CSS and am running across an issue with the body margin section. If you look at this in Firefox and then IE, you can see the line isn't lined up right in Firefox, but it is in IE (in the black header section). Here is what I have for the body tag; it's something with the margin and I can't figure it out:

        body {
            margin: -2px;
            padding: 0px;
            background: #E7E7E7 url(images/bg01.jpg) repeat-x left top;
            font-family: Arial, Helvetica, sans-serif;
            font-size: 11px;
            color: #888888;
        }

    Thank you for any responses!


  • Is there a security issue with using javascript to manipulate cookies?

    - by Scarface
    Hey guys, another quick question for the experts. I have an alert box that displays updates processed in PHP to the user, just like this site. I want to make it so that if the user closes the box, it will not pop up for another 5 minutes (unless they check the messages, in which case it will not pop up at all, because the entries that cause the pop-up are deleted from the database). On close of the box I was thinking of giving the user a cookie set from JavaScript, since the alert box is done in JavaScript. I was wondering if this is bad coding practice, since I am kind of unfamiliar with cookies and was warned against them before. If anyone has any advice or can recommend a better way, I would really appreciate it.
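    Setting and checking a cookie from JavaScript is fine for a purely cosmetic preference like this (it just shouldn't be trusted for anything security-related, since the user can edit it). A minimal sketch; the cookie name is made up for illustration:

        // When the user closes the box: suppress it for 5 minutes.
        function dismissAlert() {
            var expires = new Date(new Date().getTime() + 5 * 60 * 1000);
            document.cookie = 'alertDismissed=1; expires=' +
                              expires.toUTCString() + '; path=/';
        }

        // Before showing the box: skip it if the cookie is still alive.
        function alertDismissed() {
            return document.cookie.indexOf('alertDismissed=1') !== -1;
        }

    Once the cookie expires the browser simply stops sending it, so the alert naturally comes back after 5 minutes.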

