Search Results

Search found 261 results on 11 pages for 'crawler'.


  • Proxy settings in the JavaMail API

    - by coder
    I've written a piece of Java code where user1 sends an email to user2. I'm behind a proxy, and hence I'm getting a javax.mail.MessagingException. How do I solve this problem? Here is the code:

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.PasswordAuthentication;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class Mail {
        public static void main(String[] args) {
            final String username = "[email protected]";
            final String password = "abc";

            Properties props = System.getProperties();
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true");
            props.put("mail.smtp.host", "smtp.gmail.com");
            props.put("mail.smtp.port", "587");

            Session session = Session.getInstance(props, new javax.mail.Authenticator() {
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication(username, password);
                }
            });

            try {
                Message message = new MimeMessage(session);
                message.setFrom(new InternetAddress("[email protected]"));
                message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("[email protected]"));
                message.setSubject("Testing Subject");
                message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
                Transport.send(message);
                System.out.println("Done");
            } catch (MessagingException e) {
                throw new RuntimeException(e);
            }
        }
    }
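
    Editor's note: the proxy itself is usually the root cause here, since an HTTP proxy does not forward SMTP unless it allows CONNECT tunnelling to port 587, which most are configured not to. If a SOCKS proxy is available, JavaMail (1.4.5 and later, going by its release notes) can route through it via session properties. A minimal sketch, with a hypothetical proxy host and port:

    // Hypothetical SOCKS proxy; requires JavaMail 1.4.5+ for these properties.
    Properties props = System.getProperties();
    props.put("mail.smtp.socks.host", "proxy.example.com");
    props.put("mail.smtp.socks.port", "1080");
    // ...then build the Session exactly as above.

    With an HTTP-only corporate proxy, the usual fallback is an SMTP relay reachable inside the network.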


  • Why can't I renew my dynamic IP address?

    - by qwerty
    So, I'm going to explain this from the start. I've started a project with a friend of mine which includes a web spider that crawls through all the pages on a site and stores them in a DB. Since I've never done this before, I didn't think about the number of requests I was actually sending to the site, and after a day or two I finally got my IP blocked. I need to be able to visit that site, as it's very important to me, not only for my project but for other reasons too. (And if I'm able to renew my IP, I'm going to set a delay on the crawler so I don't get blocked or DDoS the site.) I have a dynamic IP address, at least according to my router settings. I've tried ipconfig /flushdns, ipconfig /release, and restarting the computer. No result; I end up with the same IP address. I've also tried renewing it from the router; however, I think it uses the same method, which isn't working. Is it possible the site has blocked my MAC address? Can a site even access my MAC address?
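
    Editor's note on the attempted commands: ipconfig /release only drops the lease, and nothing listed above requests a new one. The minimal sequence to try (which only affects the PC's address on the local network) is:

    ipconfig /release
    ipconfig /renew

    If the blocked address is the public one, the lease is typically negotiated between the ISP and the router/modem, so the usual routes are power-cycling the modem for an extended period or changing the MAC address the router presents (many routers offer a "MAC clone" option). As for the last question: remote websites only ever see the public IP; a MAC address does not travel beyond the local network.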


  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over top of it. I check the option "Overwrite the existing database (WITH REPLACE)", but, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database? And for the Google search crawler:

    File '%s' is claimed by '%s'(4) and '%s'(3). The WITH MOVE clause can be used to relocate one or more files.
    RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176)

    Update: The script (before I deleted the database, because I needed to get it done) was:

    RESTORE DATABASE [HealthCareGovManager]
    FILE = N'HealthCareGovManager_Data',
    FILE = N'HealthCareGovManager_Archive',
    FILE = N'HealthCareGovManager_AuditLog'
    FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
    WITH FILE = 1,
    MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
    MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
    MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
    MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
    NOUNLOAD, REPLACE, STATS = 10

    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (non)existing database. Hopefully there can be an answer so that the next guy can have one. No, nobody was in the context of the database (the error message from other connections is quite different from this error, and I only got to see this error after I killed the other connections).
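
    Editor's note: error 3176 is SQL Server complaining that two logical files claim the same physical file, and the script above does exactly that: both HealthCareGovManager_Archive and HealthCareGovManager_AuditLog are MOVEd to the same HealthCareGovManager.ndf. A sketch of the corrected clauses, with a distinct (illustrative) filename per logical file:

    MOVE N'HealthCareGovManager_Data'     TO N'D:\CGI Data\HealthCareGovManager.MDF',
    MOVE N'HealthCareGovManager_Archive'  TO N'D:\CGI Data\HealthCareGovManager_Archive.ndf',
    MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager_AuditLog.ndf',
    MOVE N'HealthCareGovManager_Log'      TO N'D:\CGI Data\HealthCareGovManager.LDF',

    With unique target paths, WITH REPLACE should be able to overwrite the existing database without deleting it first.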


  • Printing a PDF results in garbled text (sometimes)

    - by Scott Whitlock
    We have a system that renders a report as a PDF and displays it in the browser for the user. In the browser, the document always appears to display fine, but when printed on one machine it sometimes changes some of the data in the report to seemingly random characters. Here are some examples of the strings it inserts:

    Ebuf; Bvhvt ul1: -!3122
    Ti jqqf e!Wjb; Nfttf ohf s!Tf swjdf

    Additionally, the inter-character spacing is weird; it sometimes writes characters overlapping each other. I noticed some repetition in the garbled text, so I typed a few of the strings into Google, and surprisingly got a lot of hits. Here is the string I searched for:

    pdf cjmp ebuf nftf up!

    The Google search summaries contain the garbled text. However, when I click on those links in Google, I get perfectly readable PDF files. It's as if Google's PDF crawler has the same bug. Has anyone figured this out? Is this an Acrobat Reader bug?
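
    Editor's note, a pattern worth noticing before blaming the printer driver: every garbled character is exactly one code point above the intended one ("Wjb;" shifts down to "Via:", and '!' becomes a space), which looks like an off-by-one glyph/encoding mapping in the embedded font rather than random corruption. A quick Java sketch to check a captured string (the stray inner spaces are the overlap artifacts mentioned above):

    public class ShiftCheck {
        public static void main(String[] args) {
            String garbled = "Ti jqqf e!Wjb; Nfttf ohf s!Tf swjdf";
            StringBuilder decoded = new StringBuilder();
            for (char c : garbled.toCharArray()) {
                // Shift everything above space down by one; keep the spurious spaces.
                decoded.append(c > ' ' ? (char) (c - 1) : c);
            }
            // Prints "Sh ippe d Via: Messe nge r Se rvice", i.e.
            // "Shipped Via: Messenger Service" plus spacing artifacts.
            System.out.println(decoded);
        }
    }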


  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question: Will I get penalised for "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal: I want to implement Hijax to make AJAX content accessible to non-JavaScript users and search engine crawlers.

    Background: I'm working on a static file server (GitHub Pages). No server-side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html.

    The Hijax setup: index.html contains all the OUTER html/css/js... the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html. index.html has a navigation menu, with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    AJAX content: about.html does not contain anything extra to try and cloak content for crawlers. about.html contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>. about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc). about.html's <body> contains a <div id="inner-container"> which holds the content that is injected into index.html. about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page's "inner content", with a link to the index.html page to get the full page layout with menu.

    The (sneaky?) redirect: Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deep-linking to the "about" page "state" in index.html); see the sketch after this question. I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template, but the core "page" content is practically identical.

    Known issue with inbound links (e.g. share / bookmarking): It seems that if a user shares the URL /#about on their blog, when allocating inbound links to my site Google ignores everything after the # ... it allocates value to the / page. See: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate urls, i.e. /about.html.

    Duplicate: Sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site. Not sure if I should delete the Stack Overflow question, or just leave it on both sites? Please leave a comment.
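
    Editor's note: for concreteness, the redirect in question could look something like the sketch below, placed in about.html (the path-to-hash mapping is an assumption about this site's URL scheme):

    // In about.html: visitors with JavaScript get bounced to the enhanced page.
    // location.replace keeps /about.html out of the back-button history.
    var page = window.location.pathname.replace(/^\//, "").replace(/\.html$/, "");
    window.location.replace("/#" + page);

    On the policy question itself: Google's published guidance treats redirects as "sneaky" when users and crawlers see substantially different content; here both end up with practically identical content, which is the usual argument that Hijax-style progressive enhancement is safe. That is an interpretation, not a guarantee.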


  • Code review: is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with:

    Enforce FxCop rules (we are a Microsoft shop)
    Check for performance (any tools?) and security (thinking about using OWASP Code Crawler) and thread safety
    Adhere to naming conventions
    The code should cover edge cases and boundary conditions
    Handle exceptions correctly (do not swallow exceptions)
    Check if the functionality is duplicated elsewhere
    Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
    Do not pass/return nulls in methods
    Avoid dead code
    Document public and protected methods/properties/variables

    What other things do you generally look for? I am trying to see if we can quantify the review process (so it would produce identical output when done by different reviewers). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would it differ from one reviewer to another)? The objective is to have a marking system (say -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max on a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, etc., but you get the general idea: the reviewer should not spend days reviewing someone's code). What other objective/quantifiable systems do you follow to identify good/bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin.
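
    Editor's note: to make the proposed marking system concrete, here is a minimal sketch in C# 3 (VS2008-compatible); the finding names and weights are illustrative assumptions, not an agreed standard:

    using System;
    using System.Collections.Generic;

    class ReviewScore
    {
        // Assumed weights: negative for violations, positive for improvements.
        static readonly Dictionary<string, int> Weights = new Dictionary<string, int>
        {
            { "FxCopViolation", -1 },
            { "NamingConvention", -2 },
            { "Refactoring", 2 }
        };

        static int Score(IEnumerable<string> findings)
        {
            int total = 0;
            foreach (string finding in findings)
            {
                int weight;
                if (Weights.TryGetValue(finding, out weight))
                    total += weight;
            }
            return total;
        }

        static void Main()
        {
            string[] findings = { "FxCopViolation", "FxCopViolation", "Refactoring" };
            Console.WriteLine(Score(findings)); // prints 0
        }
    }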


  • Why does JavaMail fail with an AuthenticationFailedException?

    - by saravana
    package com.bcs;

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class SendMailTLS {
        public static void main(String[] args) {
            String host = "smtp.gmail.com";
            int port = 587;
            String username = "[email protected]";
            String password = "bar";

            Properties props = new Properties();
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true");

            Session session = Session.getInstance(props);
            try {
                Message message = new MimeMessage(session);
                message.setFrom(new InternetAddress(""));
                message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(""));
                message.setSubject("Testing Subject");
                message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
                Transport transport = session.getTransport("smtp");
                transport.connect(host, port, username, password);
                Transport.send(message);
                System.out.println("Done");
            } catch (MessagingException e) {
                throw new RuntimeException(e);
            }
        }
    }

    I have given all the necessary inputs, but it still fails with:

    Exception in thread "main" java.lang.RuntimeException: javax.mail.AuthenticationFailedException: failed to connect, no password specified?
        at com.bcs.SendMailTLS.main(SendMailTLS.java:43)
    Caused by: javax.mail.AuthenticationFailedException: failed to connect, no password specified?
        at javax.mail.Service.connect(Service.java:329)
        at javax.mail.Service.connect(Service.java:176)
        at javax.mail.Service.connect(Service.java:125)
        at javax.mail.Transport.send0(Transport.java:194)
        at javax.mail.Transport.send(Transport.java:124)
        at com.bcs.SendMailTLS.main(SendMailTLS.java:38)

    I am new to JavaMail. Any help would be greatly appreciated.
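
    Editor's note: "no password specified" points at the actual problem. The static Transport.send(message) opens its own connection using only the Session's configuration, which here has mail.smtp.auth=true but no Authenticator, so the username and password given to transport.connect(...) are never used (the stack trace shows the failure inside Transport.send). A sketch of one fix, sending through the transport that was actually authenticated:

    Transport transport = session.getTransport("smtp");
    transport.connect(host, port, username, password);
    try {
        // Use the authenticated connection instead of the static Transport.send(...)
        transport.sendMessage(message, message.getAllRecipients());
    } finally {
        transport.close();
    }

    Alternatively, build the Session with an Authenticator (as in the first question on this page) and keep Transport.send(message). Note that the empty strings passed to setFrom and InternetAddress.parse also need real addresses before Gmail will accept the message.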


  • FileNotFoundException when reading .xml file to parse

    - by thechiman
    I'm writing a program in Java where I read in data from an XML file and parse it. The file is imported into a folder named "Resources" in the src directory of my project. I'm using Eclipse. When I run the program, I get the following error:

    java.io.FileNotFoundException: /Users/thechiman/Dropbox/introcs/PSU SOC Crawler/resources/majors_xml_db.xml (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:106)
        at java.io.FileInputStream.<init>(FileInputStream.java:66)
        ...

    The relevant code is here:

    private void parseXML() {
        // Get a factory
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        try {
            // Use the factory to get a new DocumentBuilder
            DocumentBuilder db = dbf.newDocumentBuilder();
            // Parse the XML file, get a DOM representation
            dom = db.parse("resources/majors_xml_db.xml");
        } catch (ParserConfigurationException pce) {
            pce.printStackTrace();
        } catch (SAXException se) {
            se.printStackTrace();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }

    I do not understand why I'm getting the FileNotFoundException when the file is there. Thanks for the help.
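
    Editor's note, two things stand out. First, the prose says the folder is "Resources" while the code asks for "resources"; on a case-sensitive filesystem that alone explains the error. Second, a relative path like "resources/majors_xml_db.xml" is resolved against the JVM's working directory, not the source tree, so it breaks as soon as the program runs from anywhere else. A sketch of the classpath-based alternative, assuming Eclipse copies the folder's contents next to the compiled classes:

    // Load via the classpath so the working directory no longer matters.
    // The leading "/" means "from the classpath root"; use
    // "/resources/majors_xml_db.xml" instead if the folder is a plain
    // subfolder of src rather than a source folder of its own.
    InputStream in = getClass().getResourceAsStream("/majors_xml_db.xml");
    if (in == null) {
        throw new IllegalStateException("majors_xml_db.xml not on classpath");
    }
    dom = db.parse(in);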


  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.Net Windows Forms application that does not use multiple threads; each web request is consecutive. However, after ten successful page retrievals, every subsequent request times out. I have reviewed the similar questions already posted here on SO and have implemented the recommended techniques in my GetPage routine, shown below:

    Public Function GetPage(ByVal url As String) As String
        Dim result As String = String.Empty
        Dim uri As New Uri(url)
        Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri)
        sp.ConnectionLimit = 100
        Dim request As HttpWebRequest = WebRequest.Create(uri)
        request.KeepAlive = False
        request.Timeout = 15000
        Try
            Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse)
                Using dataStream As Stream = response.GetResponseStream()
                    Using reader As New StreamReader(dataStream)
                        If response.StatusCode <> HttpStatusCode.OK Then
                            Throw New Exception("Got response status code: " & response.StatusCode.ToString())
                        End If
                        result = reader.ReadToEnd()
                    End Using
                End Using
                response.Close()
            End Using
        Catch ex As Exception
            Dim msg As String = "Error reading page """ & url & """. " & ex.Message
            Logger.LogMessage(msg, LogOutputLevel.Diagnostics)
        End Try
        Return result
    End Function

    Have I missed something? Am I not closing or disposing of an object that should be? It seems strange that it always happens after ten consecutive requests.

    Notes: In the constructor of the class in which this method resides, I have ServicePointManager.DefaultConnectionLimit = 100. If I set KeepAlive to true, the timeouts begin after five requests. All the requests are for pages in the same domain.

    EDIT: I added a delay of between two and seven seconds between each web request so that I do not appear to be "hammering" the site or attempting a DoS attack. However, the problem still occurs.
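
    Editor's note, two hedged suggestions, since nothing above is obviously leaked. Set ReadWriteTimeout as well as Timeout (Timeout covers connecting and getting the response headers, not a stalled body read), and abort the request in the Catch so a timed-out request gives its connection back immediately instead of holding it until cleanup. A sketch:

    request.Timeout = 15000
    request.ReadWriteTimeout = 15000    ' Timeout does not cover reading the body
    ...
    Catch ex As Exception
        request.Abort()                 ' release the underlying connection on failure
        Dim msg As String = "Error reading page """ & url & """. " & ex.Message
        Logger.LogMessage(msg, LogOutputLevel.Diagnostics)
    End Try

    It is also worth ruling out the server: "works N times, then times out" is a common shape for server-side rate limiting, which a packet capture or a much longer delay would confirm or exclude.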


  • Multithreading issue

    - by vbNewbie
    I have written a multithreaded crawler: the process simply creates threads and has them access a list of urls to crawl. They then access the urls and parse the html content. All this seems to work fine. It is when I need to write to tables in a database that I experience issues. I have declared 2 arraylists that contain the content each thread parses. The first arraylist simply holds the rss feed links, and the other arraylist contains the different posts. I then use a For Each loop to iterate over one while sequentially incrementing an index into the other, and write to the database. My problem is that each time a new thread accesses one of the lists, the content is changed, and this affects the iteration. I tried using nested loops, but that did not work, and this works fine using a single thread. I hope this makes sense. Here is my code:

    SyncLock dlock
        For Each l As String In links
            finallinks.Add(l)
        Next
    End SyncLock
    SyncLock dlock
        For Each p As String In posts
            finalposts.Add(p)
        Next
    End SyncLock
    ...
    Dim i As Integer = 0
    SyncLock dlock
        For Each rsslink As String In finallinks
            postlink = finalposts.Item(i)
            i = i + 1

    finallinks and finalposts are the two arraylists. I did not include the rest of the code, which shows the threads working, but this is the essential part where my error occurs, which is basically here:

    postlink = finalposts.Item(i)
    i = i + 1

    ERROR: Index was out of range. Must be non-negative and less than the size of the collection.

    Is there an alternative?
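
    Editor's note: the two lists can drift apart. A thread that adds links but contributes a different number of posts (or interleaves with another thread between the two loops) leaves finallinks longer than finalposts, and the parallel index then runs off the end. One alternative, sketched below: store each link/post pair as a single element so the collections cannot get out of step, and iterate over a snapshot so writers cannot mutate the list mid-loop.

    ' Sketch: pair the values instead of keeping two parallel lists.
    Private ReadOnly results As New List(Of KeyValuePair(Of String, String))()

    ' Producer threads, under the same lock:
    SyncLock dlock
        For i As Integer = 0 To Math.Min(links.Count, posts.Count) - 1
            results.Add(New KeyValuePair(Of String, String)(links(i), posts(i)))
        Next
    End SyncLock

    ' Database writer: copy under the lock, then iterate the copy freely.
    Dim snapshot As List(Of KeyValuePair(Of String, String))
    SyncLock dlock
        snapshot = New List(Of KeyValuePair(Of String, String))(results)
        results.Clear()
    End SyncLock
    For Each pair In snapshot
        ' pair.Key is the rss link, pair.Value is the post; write both here.
    Next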


  • Python: Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next url. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

    ^C Interrupted; stopping... // indicates my interrupt handler ran
    Traceback (most recent call last):
      File "crawler_test.py", line 154, in <module>
        main()
      ...
      File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
        data = recv(1)
    socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
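
    Editor's note: this is standard POSIX behaviour. When a signal arrives during a blocking system call and the handler returns, the call fails with EINTR rather than resuming, and Python 2.6 surfaces that as socket.error. (Python 3.5+ retries interrupted calls automatically, per PEP 475.) A sketch of an EINTR-tolerant wrapper for this era of Python; the stop_flag callable is an assumed hook into the existing SIGINT handler:

    import errno
    import socket

    def recv_retrying(sock, nbytes, stop_flag):
        """Retry recv() when it is interrupted by a signal (EINTR).

        stop_flag is a callable returning True once the SIGINT handler has
        asked the crawler to shut down; in that case return '' so the main
        loop can reach its flush-and-exit path.
        """
        while True:
            try:
                return sock.recv(nbytes)
            except socket.error as e:
                if e.args[0] != errno.EINTR:
                    raise
                if stop_flag():
                    return ''
                # Interrupted but not shutting down: retry the call.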


  • Database setup for a web application

    - by vbNewbie
    I have an application that requires a database, and I have already set up tables but am not sure if they match the requirements of the app. The app is a crawler which fetches web urls, crawls them, and stores appropriate urls and posts, all based on client requests, which are stored as projects. So for each url stored there is one post, for each client there are many projects, and for each project there are many types of requests. We get a client with a request, assign them a project name, and then use the request to search for content and store the url and post. A request could already exist and should not be duplicated, but it should be associated with the right client, project, post, etc. Here is my schema now:

    url table:     urlId (PK), queryId (FK), url
    post table:    postId (PK), urlId (FK), post, date
    request table: queryId (PK), request
    client table:  clientId (PK), clientName, projectId (FK)
    project table: projectId (PK), queryId (FK), project

    Does this look right, or does anyone have suggestions? Of course my stored procedures and insert statements will have to be in depth.
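
    Editor's note: one relationship looks inverted given "for each client there are many projects": the foreign key should live on the project table (project references client), not on the client table, and likewise many requests per project argues for request referencing project. A DDL sketch under those stated relationships (types and sizes are illustrative):

    CREATE TABLE client  (clientId  INT PRIMARY KEY, clientName VARCHAR(100));
    CREATE TABLE project (projectId INT PRIMARY KEY, clientId INT REFERENCES client(clientId), projectName VARCHAR(100));
    CREATE TABLE request (queryId   INT PRIMARY KEY, projectId INT REFERENCES project(projectId), request VARCHAR(400));
    CREATE TABLE url     (urlId     INT PRIMARY KEY, queryId INT REFERENCES request(queryId), url VARCHAR(2048));
    CREATE TABLE post    (postId    INT PRIMARY KEY, urlId INT REFERENCES url(urlId), post TEXT, postDate DATETIME);

    To reuse an existing request across projects without duplicating it, a junction table (projectId, queryId) between project and request would replace the projectId column on request.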


  • Apache HttpClient 4.0. Weird behavior.

    - by Mikhail T
    Hello. I'm using Apache HttpClient 4.0 for my web crawler. The behavior I find strange is this: I try to get a page via an HTTP GET and receive a 404 HTTP error in response, but if I open that page in a browser it loads successfully. Details:

    1. I upload a multipart form to the server this way:

    HttpPost httpPost = new HttpPost("http://[host here]/in.php");
    MultipartEntity entity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
    entity.addPart("method", new StringBody("post"));
    entity.addPart("key", new StringBody("223fwe0923fjf23"));
    FileBody fileBody = new FileBody(new File("photo.jpg"), "image/jpeg");
    entity.addPart("file", fileBody);
    httpPost.setEntity(entity);
    HttpResponse response = httpClient.execute(httpPost);
    HttpEntity result = response.getEntity();
    String responseString = "";
    if (result != null) {
        InputStream inputStream = result.getContent();
        byte[] buffer = new byte[1024];
        int n;
        while ((n = inputStream.read(buffer)) > 0) {
            responseString += new String(buffer, 0, n);
        }
        result.consumeContent();
    }

    The upload completes successfully and I get some results back from the web server.

    2. I then fetch the result page:

    HttpGet httpGet = new HttpGet("http://[host here]/res.php?key=" + myKey + "&action=get&id=" + id);
    HttpResponse response = httpClient.execute(httpGet);
    HttpEntity entity = response.getEntity();

    I get a ClientProtocolException while the execute method runs. I was debugging this situation with log4j: the server answers "404 Not Found", but my browser loads that page with no problem. Can anybody help me? Thank you.
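
    Editor's note: when a crawler gets a 404 on a URL the browser serves fine, the most common difference is the request headers; many servers answer 404 or 403 to clients without a browser-like User-Agent, or without the Referer and cookies the browser would have sent. A sketch worth trying (header values are illustrative):

    HttpGet httpGet = new HttpGet("http://[host here]/res.php?key=" + myKey + "&action=get&id=" + id);
    httpGet.setHeader("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 6.1) AppleWebKit/533");
    httpGet.setHeader("Referer", "http://[host here]/in.php");

    Comparing the browser's request (from its developer tools or a packet capture) against HttpClient's wire log usually shows the exact header the server keys on. Also note that cookies set by the POST carry over to the GET only when both calls go through the same httpClient instance.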


  • Fastest way to crawl recursive NTFS directories in C++

    - by Peter Parker
    I have written a small crawler to scan and resort directory structures. It is based on dirent (which is a small wrapper around FindNextFileA). In my first benchmarks it is surprisingly slow: around 123473 ms for 4500 files (ThinkPad T60p, local Samsung 320 GB 2.5" HD):

    121481 files found in 123473 milliseconds

    Is this speed normal? This is my code:

    int testPrintDir(std::string strDir, std::string strPattern = "*", bool recurse = true) {
        struct dirent *ent;
        DIR *dir;
        dir = opendir(strDir.c_str());
        int retVal = 0;
        if (dir != NULL) {
            while ((ent = readdir(dir)) != NULL) {
                if (strcmp(ent->d_name, ".") != 0 && strcmp(ent->d_name, "..") != 0) {
                    std::string strFullName = strDir + "\\" + std::string(ent->d_name);
                    std::string strType = "N/A";
                    bool isDir = (ent->data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0;
                    strType = (isDir) ? "DIR" : "FILE";
                    if ((!isDir)) {
                        //printf("%s <%s>\n", strFullName.c_str(), strType.c_str()); //ent->d_name);
                        retVal++;
                    }
                    if (isDir && recurse) {
                        retVal += testPrintDir(strFullName, strPattern, recurse);
                    }
                }
            }
            closedir(dir);
            return retVal;
        } else {
            /* could not open directory */
            perror("DIR NOT FOUND!");
            return -1;
        }
    }
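
    Editor's note: for comparison, a sketch that drops the dirent wrapper and calls the Win32 API directly. Two flags typically help: FindExInfoBasic skips the 8.3 short-name lookup, and FIND_FIRST_EX_LARGE_FETCH uses a larger directory read buffer (both need Windows 7 / Server 2008 R2 or later; fall back to FindExInfoStandard and flag 0 on older systems). Whether this closes the gap here is an open question, since a cold NTFS walk is usually dominated by disk seeks; a second run over a warm cache is the fairer benchmark.

    #include <windows.h>
    #include <string>

    // Count plain files below dir, recursing into subdirectories.
    int countFiles(const std::wstring &dir) {
        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileExW((dir + L"\\*").c_str(),
                                    FindExInfoBasic,            // skip 8.3 names
                                    &fd, FindExSearchNameMatch, NULL,
                                    FIND_FIRST_EX_LARGE_FETCH); // bigger batches
        if (h == INVALID_HANDLE_VALUE) return 0;
        int count = 0;
        do {
            if (wcscmp(fd.cFileName, L".") == 0 || wcscmp(fd.cFileName, L"..") == 0)
                continue;
            if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                count += countFiles(dir + L"\\" + fd.cFileName);
            else
                ++count;
        } while (FindNextFileW(h, &fd));
        FindClose(h);
        return count;
    }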


  • JavaMail doesn't send mail

    - by Jose Hdez
    I am developing a Java application and I am using JavaMail to send a mail. My code is the following:

    Properties props = new Properties();
    props.put("mail.smtp.host", "diana.cartif.es");
    props.put("mail.smtp.socketFactory.port", "465");
    props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
    props.put("mail.smtp.auth", "true");
    props.put("mail.smtp.port", "465");

    Session session = Session.getDefaultInstance(props, new javax.mail.Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication("alerts", "pass");
        }
    });

    Message message = new MimeMessage(session);
    message.setFrom(new InternetAddress("[email protected]"));
    message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("[email protected]"));
    message.setSubject("Testing Subject");
    message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
    Transport.send(message);

    However, when I execute this code it throws an exception:

    javax.mail.MessagingException: Could not connect to SMTP host: diana.cartif.es, port: 465, response: -1
        at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1960)
        at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:642)
        at javax.mail.Service.connect(Service.java:317)
        at javax.mail.Service.connect(Service.java:176)
        at javax.mail.Service.connect(Service.java:125)
        at javax.mail.Transport.send0(Transport.java:194)
        at javax.mail.Transport.send(Transport.java:124)
        at com.cartif.data.MainConnection.getFTPConnection(MainConnection.java:106)
        at com.cartif.main.Main.connectToServer(Main.java:72)
        at com.cartif.main.Main.main(Main.java:60)

    The connection details are right; I checked them against my mail server. Could someone help me please? Thanks!
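
    Editor's note: "response: -1" on connect usually means an SSL/plaintext mismatch; the client is forcing an SSLSocketFactory onto port 465 while that port on the server is actually speaking plain SMTP, or the server only offers STARTTLS on 25/587. A sketch of the STARTTLS variant to try (the port is an assumption; check what diana.cartif.es really listens on, e.g. with telnet):

    Properties props = new Properties();
    props.put("mail.smtp.host", "diana.cartif.es");
    props.put("mail.smtp.port", "587");               // or 25, whatever the server offers
    props.put("mail.smtp.auth", "true");
    props.put("mail.smtp.starttls.enable", "true");   // upgrade a plain connection to TLS

    It is also worth switching Session.getDefaultInstance to Session.getInstance: the default instance is cached per JVM, so stale properties from an earlier call can silently win over the ones above.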


  • Need optimized code for hiding and showing a div in jQuery

    - by novellino
    Hello, I have a div:

    <div id="p1" class="img-projects" style="margin-left:0;">
        <a href="project1.php"><img src="image1.png"/></a>
        <div id="p1" class="project-title">Bar Crawler</div>
    </div>

    On mouse-over I want to fade the image to a low opacity and make the project title visible, so I use this code:

    <script type="text/javascript">
    $(function() {
        $('.project-title').hide();
        $('#p1.img-projects img').mouseover(function() {
            $(this).stop().animate({ opacity: 0.3 }, 800);
            $('#p1.project-title').fadeIn(500);
        });
        $('#p1.img-projects img').mouseout(function() {
            $(this).stop().animate({ opacity: 1.0 }, 800);
            $('#p1.project-title').fadeOut();
        });
        $('#p2.img-projects img').mouseover(function() {
            $(this).stop().animate({ opacity: 0.3 }, 800);
            $('#p2.project-title').fadeIn(500);
        });
        $('#p2.img-projects img').mouseout(function() {
            $(this).stop().animate({ opacity: 1.0 }, 800);
            $('#p2.project-title').fadeOut();
        });
    });
    </script>

    The code works fine, but does anyone know a way to optimize it? Thank you.
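
    Editor's note, a possible consolidation, assuming every project block keeps the .img-projects / .project-title structure: bind one pair of handlers by class and locate the title relative to the hovered image, which removes the per-id duplication entirely. (The markup also reuses id="p1" on two elements; ids must be unique in HTML, and keying off classes sidesteps that as well.)

    <script type="text/javascript">
    $(function() {
        $('.project-title').hide();
        $('.img-projects img').hover(
            function() {
                $(this).stop().animate({ opacity: 0.3 }, 800);
                $(this).closest('.img-projects').find('.project-title').fadeIn(500);
            },
            function() {
                $(this).stop().animate({ opacity: 1.0 }, 800);
                $(this).closest('.img-projects').find('.project-title').fadeOut();
            }
        );
    });
    </script>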


  • First major web app

    - by vbNewbie
    I have created a web app version of my previous crawler app. The initial form has controls that let the client make selections and start a search "job". These search jobs are each run by a separate thread, created individually and added to a list so I can keep track of them. The idea is to have another web form that displays this list of jobs and their current status, and that allows jobs to be cancelled or removed, but only from the server side. This second form contains a grid to display the jobs. Now, I have no idea whether I should create the threads in the initial form's code or send all the user input to my main class, which runs the search; and if so, how do I pass the thread list to the second form to have it displayed in the grid? Any ideas are really appreciated.

    Dim count As Integer = 0
    Dim numThread As Integer = 0
    Dim jobStartTime As Date
    Dim thread = New Thread(AddressOf ResetFormControlValues) 'StartBlogDiscovery
    jobStartTime = Date.Now
    thread.Name = "Job" & jobStartTime 'clientName
    Session("Job") = "Job" & jobStartTime 'clientName
    thread.Start()
    Thread.Sleep(50000)
    If numThread >= 10 Then
        For Each thread In threadlist
            thread.Join()
        Next
    Else
        numThread = numThread + 1
        SyncLock threadlist
            threadlist.Enqueue(thread)
        End SyncLock
    End If

    This is the code that runs when the user clicks the search button on the initial form. And this is what I thought might work on the second web form, using the session:

    Try
        If Not Page.IsPostBack Then
            If Not Session("Job") Is Nothing Then
                Grid1.DataSource = Session("Job")
                Grid1.DataBind()
            End If
        End If
    Finally
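
    Editor's note, two problems stand out: Session is per-user, so a job list stored there is invisible to other users and to an admin page, and Grid1.DataSource needs a collection, while Session("Job") here holds a single string. One sketch of an alternative (all names illustrative): keep the jobs in a Shared, lock-protected registry at application scope and bind the second page to a snapshot of it.

    ' Sketch: an application-wide job registry visible to every page.
    Public Class JobRegistry
        Private Shared ReadOnly LockObj As New Object()
        Private Shared ReadOnly Jobs As New List(Of String)()

        Public Shared Sub Add(ByVal jobName As String)
            SyncLock LockObj
                Jobs.Add(jobName)
            End SyncLock
        End Sub

        Public Shared Function Snapshot() As List(Of String)
            SyncLock LockObj
                Return New List(Of String)(Jobs)
            End SyncLock
        End Function
    End Class

    ' In the first form, after starting the thread:
    '     JobRegistry.Add(thread.Name)
    ' In the second form:
    '     Grid1.DataSource = JobRegistry.Snapshot()
    '     Grid1.DataBind()

    For anything long-running, a worker outside the request pipeline (for example a Windows service polling a jobs table) is the more robust design, since IIS can recycle the app domain and take the threads with it.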


  • get_browser not working

    - by tazphoenix
    It's not working. I mean, I have plenty of scripts to get the IP and OS, but get_browser is a built-in function and should work, yet it doesn't. When I print_r the function's result I get:

    Array (
        [browser_name_regex] => §^.*$§
        [browser_name_pattern] => *
        [browser] => Default Browser
        [version] => 0
        [majorver] => 0
        [minorver] => 0
        [platform] => unknown
        [alpha] =>
        [beta] =>
        [win16] =>
        [win32] =>
        [win64] =>
        [frames] => 1
        [iframes] =>
        [tables] => 1
        [cookies] =>
        [backgroundsounds] =>
        [cdf] =>
        [vbscript] =>
        [javaapplets] =>
        [javascript] =>
        [activexcontrols] =>
        [isbanned] =>
        [ismobiledevice] =>
        [issyndicationreader] =>
        [crawler] =>
        [cssversion] => 0
        [supportscss] =>
        [aol] =>
        [aolversion] => 0
    )

    I'm using Windows 7 and Firefox.
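
    Editor's note: "Default Browser" is the fallback row in browscap.ini, so the file is being loaded (a missing file makes get_browser() return false instead), but it is likely too old to recognize this Firefox user agent. The usual fix is to download a current browscap.ini from browscap.org, point php.ini at it, and restart the web server; the directive can only be set at the system level, not via ini_set(). A sketch (the path is illustrative):

    ; php.ini
    [browscap]
    browscap = "C:\php\extras\browscap.ini"

    After that, get_browser(null, true) uses the current HTTP_USER_AGENT and returns an array, as in the dump above.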


  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars the LLBLGen Pro v3 designer is built on. The library contains many data structures and algorithms, and the source code is well documented and commented, often with links to official descriptions and papers of the algorithms and data structures implemented. The source code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon.

    One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and data structures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia; Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia:

      • Command, Command management. This system is usable to build a fully undo/redo-aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping, so you can use it to make a class undo/redo-aware, then set properties and use its contents without using commands at all. The Commands namespace is the place to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.

      • Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class and which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available so DFS-based algorithms can be implemented quickly. All graph classes are undo/redo-aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo/redo-aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you use as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code.

      • Heaps. Heaps are data structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap data structures known today, especially when you want to merge different instances into one.

      • Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.

      • Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data structure it is already stored in.

      • PropertyBag. This re-implements Tony Allowatt's original idea in .NET 3.5-specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data or settings stored in XML or another format and want to create an editable form of it without creating many editors.

      • IEditableObject/IDataErrorInfo implementations. The library contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo-aware code can use them out of the box.

      • EventThrottler. An event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up, so it's placed between the event source and your real handler. If your UI is flooded with events from data structures observed by your UI or a middle tier, you can use this class to filter out duplicates, avoiding redundant updates to UI elements and observers choking on many redundant events.

      • Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key instead of one (as the default Dictionary does) and is also merge-aware, so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces to help with building a de-coupled, observer-based system, and some utility extension methods for the defined data structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!


  • Entrepreneur Needs Programmers, Architects, or Engineers?

    - by brand-newbie
    Hi guys (ladies included). I posted on a related site, but THIS is the place to be. I want to build a specialized website. I am an entrepreneur, and I am refining valuations now for venture capitalists: i.e., determining how much cash I will need. I need help in understanding what human resources I need (i.e., software programmers, architects, engineers, etc.). Trust me, I have read most, if not all, of the threads here on the subject, and I can tell you I am no closer to the answer than ever. Here's my technology problem: the website will include two main components: a search engine (web crawler)... and a very large database. The search engine will not be a competitor to Google, obviously; however, it will require bots to scour the web. The website will be, basically, a statistical database, where users should be able to pull up any statistic from numerous fields. Like any entrepreneur with a web-based vision, I'm hoping to get 100+ million registered users eventually. However, practically, we will start as small as feasible. As regards the technology (database architecture, servers, etc.), I do want quality, quality, quality. My priorities are speed and the capability to be scalable... so that if I did get globally large, we could do it without having to re-engineer anything. In other words, I want the back end and the infrastructure to be scalable and professional, with emphasis on quality. I am not an IT professional. Although I've built several Joomla-based websites, I'm just a rookie who's only used minor JavaScript coding to modify a few plug-ins and components. The business I'm trying to create requires specialization and experts. I want to define the problem and let a capable team create the final product, and I will stay totally hands-off. So who do you guys suggest I hire to run this thing? A software engineer? I was thinking I would need a database engineer, a systems security engineer, and maybe 2 or 3 programmers for the search engine. Also a web designer... and maybe a part-time graphic designer... everyone working under a single software engineer. What do you guys think? Who should I hire? I REALLY need help from some people in the industry (YOU guys) on this. Is this project doable in 6 months? If so, how many people will I need? Who exactly needs to head up this thing? A senior software engineer, an embedded engineer, a C/C++ engineer, a Java engineer, a database engineer? And do I build this thing in Ruby or Java?


  • Crawling not working on Windows Server 2008

    - by axtolf
    Hi, we installed a new MOSS 2007 farm in a Windows Server 2008 SP2 environment, using SQL Server 2008. The configuration is 1 index server, 1 front end, and 1 database server, all on ESX 4.0. Every service that needs one uses a dedicated user, so search has a dedicated user. The installation went well and we found no problems. We installed MOSS from an SP1 ISO and afterwards upgraded WSS and MOSS to SP2; we installed the Italian language pack too and patched it to SP2. We created a new SSP, then a web application, and under it a root site collection with a couple of files. The problem is that we can't make crawling work in any way; it seems the crawler is not able to reach the web application we want to crawl. The event viewer on the index server shows this error when we try to crawl:

    The start address <h..p://name.domain.it:81> cannot be crawled. Context: Application 'SSP1', Catalog 'Portal_Content'. Details: The object was not found. (0x80041201)

    The crawl log in the search administration only says:

    h..p://name.domani.it:81 The object was not found. (The item was deleted because it was either not found or the crawler was denied access to it.)

    The site is fully accessible from everywhere, including from the index server, using either the farm admin user or the search service user. The farm's log says... nothing! When we start a full crawl of the site, it runs for a second and then produces the errors above, while the farm log stays silent. Any suggestion or help is really appreciated, since we are losing a lot of time on this and really have no idea what's wrong with this farm.
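
    Editor's note: one classic cause of exactly this signature (crawler on the same farm, site reachable from everywhere else, 0x80041201 "object not found") is the Windows loopback check, present since Windows Server 2003 SP1: authentication against a host-header URL fails when the request originates from the server itself, and the crawler reports it as not found / access denied. The supported fix from Microsoft KB896861 is to register the crawled host names in BackConnectionHostNames; a sketch, run on the index and web servers with the real host name substituted, followed by a reboot or IISReset:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "name.domain.it"

    (The quicker but less secure variant is a DisableLoopbackCheck DWORD under HKLM\SYSTEM\CurrentControlSet\Control\Lsa.) This is a likely suspect, not a certain diagnosis; verifying that the content access account has read access in the web application's policy is the other usual step.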


  • How do I get a screenshot of a given website using C#

    - by Ender
    I'm writing a specialised crawler and parser for internal use, and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and save each as a bitmap image; from there I plan to use LockBits to create a list of the five most-used colours within the image. To my knowledge it's the easiest way to get the colours used within a web page, but if there is an easier way to do it, please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme?

    EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. Anyway, the code below seems to work well, but the problem I am having right now is that I am running it within a form, and naturally, with Application.Run() being called, I cannot run two instances of the same form at once. Form.ShowDialog() was recommended, but that broke everything. Can anyone give me a hand with this code?

    public static void buildScreenshotFromURL(string url)
    {
        int width = 800;
        int height = 600;
        using (WebBrowser browser = new WebBrowser())
        {
            browser.Width = width;
            browser.Height = height;
            browser.ScrollBarsEnabled = true;
            // This will be called when the page finishes loading
            browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted);
            browser.Navigate(url);
            // This prevents the application from exiting until Application.Exit
            // is called. Application.Run() does not work as it cannot be called
            // twice; Form.ShowDialog() was recommended but still has issues.
            Application.Run();
        }
    }

    public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        // Define the size of the thumbnail needed
        int thumbSize = 50;
        // Now that the page is loaded, save it to a bitmap
        WebBrowser browser = (WebBrowser)sender;
        using (Graphics graphics = browser.CreateGraphics())
        {
            using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics))
            {
                Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
                browser.DrawToBitmap(bitmap, bounds);
                Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero));
                thumbBitmap.Save("screenshot.png", ImageFormat.Png);
                handleImage(thumbBitmap);
            }
        }
    }
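
    Editor's note: the Application.Run() limitation has a standard workaround: give each capture its own STA thread with its own message loop, and end that loop from the DocumentCompleted handler. A sketch under that approach, assuming the usings from the question plus System.Threading; the capture body is elided, as it is the DrawToBitmap logic above:

    // Each capture gets a private message loop on its own STA thread, so
    // multiple URLs can be processed one after another (or concurrently).
    public static void CaptureUrl(string url)
    {
        Thread worker = new Thread(delegate()
        {
            using (WebBrowser browser = new WebBrowser())
            {
                browser.Width = 800;
                browser.Height = 600;
                browser.ScrollBarsEnabled = true;
                browser.DocumentCompleted += delegate(object s, WebBrowserDocumentCompletedEventArgs e)
                {
                    // ... DrawToBitmap / thumbnail / save, as in the question ...
                    Application.ExitThread();   // ends this thread's message loop only
                };
                browser.Navigate(url);
                Application.Run();              // pumps messages for this thread
            }
        });
        worker.SetApartmentState(ApartmentState.STA);   // WebBrowser requires STA
        worker.Start();
        worker.Join();
    }

    One more note on the original: GetThumbnailImage's third argument (thumbCall) must be a non-null Image.GetThumbnailImageAbort delegate; a trivial "delegate { return false; }" satisfies it.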

