Search Results

Search found 2778 results on 112 pages for 'smartcard reader'.


  • linux raw socket programming question

    - by user194420
    Hi all, I am trying to create a raw socket that sends and receives messages with their IP/TCP headers under Linux. I can successfully bind to a port and receive TCP messages (e.g. SYN), but the messages seem to be handled by the OS rather than by my program; I am just a reader of them (like Wireshark). My raw socket binds to port 8888, and then I try to telnet to that port. Wireshark shows that port 8888 replies with an RST/ACK when it receives the SYN request, while my program shows that it receives a new message but does not reply with anything. Is there any way to actually bind to that port (and prevent the OS from handling it)? Here is part of my code; I cut out the error checking for easier reading:

        sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);

        int tmp = 1;
        const int *val = &tmp;
        setsockopt(sockfd, IPPROTO_IP, IP_HDRINCL, val, sizeof(tmp));

        servaddr.sin_family = AF_INET;
        servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servaddr.sin_port = htons(8888);
        bind(sockfd, (struct sockaddr*)&servaddr, sizeof(servaddr));

        /* call recv in a loop */

  • DataReader.Close is called in if-else branching. How to validate the DataReader is actually closed using FxCop?

    - by tanmay
    Hi, I have written a couple of custom rules for FxCop 1.36. I wrote code to find out whether an opened DataReader is closed or not, but it does not check which DataReader object is calling the Close() method, so I can't be sure that all opened DataReader objects are closed. Second: if I am using a DataReader in if-else branching, like

        if (1 == 2)
            dr = cmd.ExecuteReader();
        else
            dr = cmd2.ExecuteReader();

    then in this case it will search for two DataReader objects to be closed. I am putting in my code for more clarity:

        public override ProblemCollection Check(Member member)
        {
            Method method = member as Method;
            int countCatch = 0;
            int countErrLog = 0;
            Instruction objInstr = null;
            if (method != null)
            {
                for (int i = 0; i < method.Instructions.Count; i++)
                {
                    objInstr = method.Instructions[i];
                    if (objInstr.Value != null)
                    {
                        if (objInstr.Value.ToString().Contains("System.Data.SqlClient.SqlDataReader"))
                        {
                            countCatch += 1;
                        }
                        if (countCatch > 0)
                        {
                            if (objInstr.Value.ToString().Contains("System.Data.SqlClient.SqlDataReader.Close"))
                            {
                                countErrLog += 1;
                            }
                        }
                    }
                }
            }
            if (countErrLog != countCatch)
            {
                Resolution resolu = GetResolution(new string[] { method.ToString() });
                Problems.Add(new Problem(resolu));
            }
            return Problems;
        }

    Thanks and regards, Tanmay.

  • Stream PDF to another local App

    - by Nathan
    Hi, I'm currently trying to optimize a small Firefox extension that grabs a PDF off the current document and sends it to a port that another local application is listening on. Right now it uses a terrifying hack job built on cache viewing: it loads the cache, searches through it using the current URL, grabs the file, and saves it to a temp directory. Then I stream the file in, delete the temp copy, and send it through the socket. For my new design, I'd ideally want to build it from scratch, cut out saving to the local machine entirely, and just stream it through the socket. I've been looking at doing something like:

        // check page to ensure it's a PDF
        // init in/out streams
        // stream through socket
        // flush

    This would be vastly superior to the 400-line hacked-up mess I have now, but I'm new to building Firefox extensions, and after reading a lot about URIs, file streaming, and such, I'm probably more confused than when I started trying to fix this three hours ago. I'm okay with sending things through sockets and whatnot; I'm mainly confused about which of the multitude of interfaces I should use. Gah! Thanks! Also, long time reader, first time poster!

  • Object reference not set to an instance of an object.

    - by Lambo
    I have the following code:

        private static void convert()
        {
            string csv = File.ReadAllText("test.csv");
            string year = "2008";
            XDocument doc = ConvertCsvToXML(csv, new[] { "," });
            doc.Save("update.xml");
            XmlTextReader reader = new XmlTextReader("update.xml");
            XmlDocument testDoc = new XmlDocument();
            testDoc.Load(@"update.xml");
            XDocument turnip = XDocument.Load("update.xml");
            webservice.singleSummary[] test = new webservice.singleSummary[1];
            webservice.FinanceFeed CallWebService = new webservice.FinanceFeed();
            foreach (XElement el in turnip.Descendants("row"))
            {
                test[0].account = el.Descendants("var").Where(x => (string)x.Attribute("name") == "account").SingleOrDefault().Attribute("value").Value;
                test[0].actual = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "actual").SingleOrDefault().Attribute("value").Value);
                test[0].commitment = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "commitment").SingleOrDefault().Attribute("value").Value);
                test[0].costCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "costCentre").SingleOrDefault().Attribute("value").Value;
                test[0].internalCostCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "internalCostCentre").SingleOrDefault().Attribute("value").Value;
                MessageBox.Show(test[0].account, "Account");
                MessageBox.Show(System.Convert.ToString(test[0].actual), "Actual");
                MessageBox.Show(System.Convert.ToString(test[0].commitment), "Commitment");
                MessageBox.Show(test[0].costCentre, "Cost Centre");
                MessageBox.Show(test[0].internalCostCentre, "Internal Cost Centre");
                CallWebService.updateFeedStatus(test, year);
            }
        }

    It is coming up with an unhandled NullReferenceException, saying that the object reference is not set to an instance of an object. The error occurs on the first test[0].account line. How can I get past this?

  • ArrayList<Integer>, Collections.sort and LineNumberReader help

    - by user1819551
    I have an issue I can't get to work, so let me get to the point and explain in the code, thanks. In this class, what I want to do is insert the integers, sort the list, and write it out with a BufferedWriter in a column, without commas. Right now I am getting this:

        [1110018, 1110032, 1110056, 1110059, 1110063, 1110085, 1110096, 1110123, 1110125, 1110185, 1110456, 1110459]

    I want it like this:

        111xxxxx
        111xxxxx
        ...

    I can't do it in a single array; it has to be an ArrayList. This is my collecting:

        list.addNumbers(numbers);
        list.display();

    This is my writer (it is buffered):

        coma.write("\n" + list.display());
        coma.flush();

    Here is my class:

        public class IdCount {
            private ArrayList<Integer> properNumber = new ArrayList<>();

            public void addNumbers(Integer numbers) {
                properNumber.add(numbers);
                Collections.sort(properNumber);
            }

            public String display() { // I tried .toString(), it does not work
                return properNumber.toString();
            }
        }

    My second issue is LineNumberReader. This is my collecting and my writing:

        try {
            Reader input = new BufferedReader(new FileReader(inputFile));
            try (Scanner in = new Scanner(input)) {
                while (in.hasNext()) {
                    // (more code)
                    asp = new LineNumberReader(input);
                    int rom = 0;
                    while (asp.readLine() != null) {
                        rom++;
                    }
                    System.out.println(rom);
                    coma.write(rom);

    This one will not write anything, and my System print gives me only "12" and "0" in a column.
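
    A minimal sketch of one way out (assuming the surrounding writer code stays as in the question): build the column inside display() instead of relying on ArrayList.toString(), which is what produces the bracket-and-comma form. Note also that Writer.write(int) writes a single character code rather than the number's digits, so a line count should be converted to text before writing:

        // hedged sketch: emit one value per line instead of using toString()
        public String display() {
            StringBuilder sb = new StringBuilder();
            for (Integer n : properNumber) {
                sb.append(n).append('\n'); // one number per line, no commas
            }
            return sb.toString();
        }

        // and when writing the line count from the second snippet:
        // coma.write(String.valueOf(rom)); // not coma.write(rom)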

  • Which pdf elements could cause crashes?

    - by Felixyz
    This is a very general question, but it's based on a specific problem. I've created a PDF reader app for the iPad, and it works fine except for certain PDF pages which always crash the app. We have now found out that the very same pages cause Safari to crash as well, so, as I had started to suspect, the problem is somewhere in Apple's PDF rendering code. From what I have been able to see, the crashing pages cause the rendering libraries to start allocating memory like mad until the app is killed. I have nothing else to help me pinpoint what triggers this process. It doesn't necessarily happen with the largest documents, or the ones with the most shapes. In fact, we haven't found any parameter that helps us predict which pages will crash and which won't. We just discovered that running the pages through a consumer program that lets you merge docs gets rid of the problem, but I haven't been able to work out which attribute or element is the key, and changing documents by hand is not an option for us in the long run; we need to run an automated process on our server. I'm hoping someone with deeper knowledge of the PDF file format can point me in a reasonable direction to look for document features that could cause this kind of behavior. All I've found so far is something about JBIG2 images, and I don't think we have any of those.

  • Extracting noun+noun or (adj|noun)+noun from Text

    - by ssuhan
    I would like to ask whether it is possible to extract noun+noun or (adj|noun)+noun sequences with the R package openNLP. That is, I would like to use linguistic filtering to extract candidate noun phrases. Could you show me how? Many thanks.

    Thanks for the responses; here is the code:

        library("openNLP")
        acq <- "Gulf Applied Technologies Inc said it sold its subsidiaries engaged in pipeline and terminal operations for 12.2 mln dlrs. The company said the sale is subject to certain post closing adjustments, which it did not explain. Reuter."
        acqTag <- tagPOS(acq)
        acqTagSplit = strsplit(acqTag, " ")
        acqTagSplit
        qq = 0
        tag = 0
        for (i in 1:length(acqTagSplit[[1]])) {
            qq[i] <- strsplit(acqTagSplit[[1]][i], '/')
            tag[i] = qq[i][[1]][2]
        }
        index = 0
        k = 0
        for (i in 1:(length(acqTagSplit[[1]]) - 1)) {
            if ((tag[i] == "NN" && tag[i+1] == "NN") |
                (tag[i] == "NNS" && tag[i+1] == "NNS") |
                (tag[i] == "NNS" && tag[i+1] == "NN") |
                (tag[i] == "NN" && tag[i+1] == "NNS") |
                (tag[i] == "JJ" && tag[i+1] == "NN") |
                (tag[i] == "JJ" && tag[i+1] == "NNS")) {
                k = k + 1
                index[k] = i
            }
        }
        index

    The reader can refer to index on acqTagSplit to do the noun+noun or (adj|noun)+noun extraction. (The code is not optimal but works; if you have any ideas, please let me know.) Furthermore, I still have a problem. Justeson and Katz (1995) proposed another linguistic filter to extract candidate noun phrases: ((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun. I cannot quite understand its meaning; could someone do me a favor and explain it, or translate such a representation into R?

  • PDF external streams in Mac OS X Preview

    - by olpa
    According to the specification, a part of a PDF document can reside in an external file. An example for an image:

        2 0 obj
        <<
          /Type /XObject
          /Subtype /Image
          /Width 117
          /Height 117
          /BitsPerComponent 8
          /Length 0
          /ColorSpace /DeviceRGB
          /FFilter /DCTDecode
          /F (pinguine.jpg)
        >>
        stream
        endstream
        endobj

    I found that this functionality does work in Adobe Acrobat 5.0 for Windows (sample PDF with the image), and I also managed to view this file in Adobe Acrobat Reader 8.1.3 for Mac OS X after I found the setting "Allow external content". Unfortunately, it seems that non-Adobe tools ignore the external stream feature. I hope I'm wrong, and therefore ask: how can I enable external streams in Mac OS X? (I think all the system Mac OS X tools use the same library, therefore I say "Mac OS X" instead of "Preview".) Or could there be a programming hook to emulate external streams? My task is to store a big set of images (total ~300 MB) outside of a small PDF (~1 MB). At some point, I want to filter the PDF through a Quartz filter and get a PDF with the images embedded. Any suggestions are welcome.

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers. I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A, and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application lets me conduct and save refined search queries (over the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably. However, I find this RSS mashup scheme kind of clumsy: it takes quite a bit of time to organize all of the feeds initially. I would also like to do further natural language processing on the data, and eventually to rank the collected list of URLs in some order of media prominence. I don't want to pay the ridiculous Radian6 web analytics fees when my intuition tells me that with a bit of elbow grease I can leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track? I guess I am imagining an application that allows me to query in a refined manner: a manner that lets me search for keyword combinations, apply AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs, Twitter, or social networking communities - and then save the results of my queries in a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. my best, Mark

  • Java Regex for matching hexadecimal numbers in a file

    - by Ranman
    So I'm reading in a file (like java program < trace.dat) which looks something like this:

        58
        68
        58
        68
        40
        c
        40
        48
        FA

    if I'm lucky - but more often it has several whitespace characters before and after each line. These are hexadecimal addresses that I'm parsing; I basically need to make sure that I can read each line using a Scanner, BufferedReader, whatever, and then convert the hexadecimal to an integer. This is what I have so far:

        Scanner scanner = new Scanner(System.in);
        int address;
        String binary;
        Pattern pattern = Pattern.compile("^\\s*[0-9A-Fa-f]*\\s*$", Pattern.CASE_INSENSITIVE);
        while (scanner.hasNextLine()) {
            address = Integer.parseInt(scanner.next(pattern), 16);
            binary = Integer.toBinaryString(address);
            // Do lots of other stuff here
        }
        // DO MORE STUFF HERE...

    I've traced all my errors to parsing the input, so I guess I'm just trying to figure out what regex or approach I need to get this working the way I want.
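
    A hedged sketch of one approach (the exact parsing logic is an assumption, not the poster's final code): Scanner already splits input on whitespace, so the leading and trailing blanks disappear on their own, and anchors like ^ and $ in a token pattern are unnecessary; match bare hex tokens and let Integer.parseInt do the conversion:

        import java.util.Scanner;
        import java.util.regex.Pattern;

        public class TraceParser {
            public static void main(String[] args) {
                Scanner scanner = new Scanner(System.in);
                // a token is a run of hex digits; Scanner has already stripped whitespace
                Pattern hex = Pattern.compile("[0-9A-Fa-f]+");
                while (scanner.hasNext()) {
                    if (scanner.hasNext(hex)) {
                        int address = Integer.parseInt(scanner.next(), 16);
                        String binary = Integer.toBinaryString(address);
                        System.out.println(address + " -> " + binary);
                    } else {
                        scanner.next(); // skip anything that isn't a hex token
                    }
                }
            }
        }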

  • sqlite3 'database is locked' won't go away with retries

    - by Azarias
    I have a sqlite3 database that is accessed by a few threads (3-4). I am aware of the general limitations of sqlite3 with regard to concurrency, as stated at http://www.sqlite.org/faq.html#q6, but I am convinced that is not the problem. All of the threads both read from and write to this database. Whenever I do a write, I have the following construct:

        try:
            Cursor.execute(q, params)
            Connection.commit()
        except sqlite3.IntegrityError:
            Notify
        except sqlite3.OperationalError:
            print sys.exc_info()
            print("DATABASE LOCKED; sleeping for 3 seconds and trying again")
            time.sleep(3)
            Retry

    On some runs I won't even hit this block, but when I do, it never comes out of it: it keeps retrying, but I keep getting the 'database is locked' error from exc_info. If I understand the reader/writer lock usage correctly, some amount of waiting should help with the contention. What this sounds like is deadlock, but I do not use any transactions in my code, and every SELECT or INSERT is simply a one-off. Some threads, however, keep the same connection when they do their operations (which include a mix of SELECTs, INSERTs, and other modifiers). I would appreciate it if you could shed some light on this, and also on ways of fixing it (besides using a different database engine).

  • Lucene DuplicateFilter question

    - by chardex
    Hi, why doesn't DuplicateFilter work together with other filters? For example, in a slight remake of the DuplicateFilterTest test, I get the impression that the filter is not applied with the other filters but trims the results first:

        public void testKeepsLastFilter() throws Throwable {
            DuplicateFilter df = new DuplicateFilter(KEY_FIELD);
            df.setKeepMode(DuplicateFilter.KM_USE_LAST_OCCURRENCE);
            Query q = new ConstantScoreQuery(new ChainedFilter(new Filter[]{
                new QueryWrapperFilter(tq),
                // new QueryWrapperFilter(new TermQuery(new Term("text", "out"))), // works right, it is the last document
                new QueryWrapperFilter(new TermQuery(new Term("text", "now")))    // why doesn't this work? It is the third document
            }, ChainedFilter.AND));
            ScoreDoc[] hits = searcher.search(q, df, 1000).scoreDocs;
            assertTrue("Filtered searching should have found some matches", hits.length > 0);
            for (int i = 0; i < hits.length; i++) {
                Document d = searcher.doc(hits[i].doc);
                String url = d.get(KEY_FIELD);
                TermDocs td = reader.termDocs(new Term(KEY_FIELD, url));
                int lastDoc = 0;
                while (td.next()) {
                    lastDoc = td.doc();
                }
                assertEquals("Duplicate urls should return last doc", lastDoc, hits[i].doc);
            }
        }
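
    One thing worth ruling out (a hedged sketch, not a confirmed fix): in the test above, the DuplicateFilter is passed as the search filter while the term filters are baked into a ConstantScoreQuery, so the deduplication and the term restriction run as two independent steps. Combining everything into a single ChainedFilter at least makes the intended AND explicit:

        // assumption: df, tq and searcher exactly as in the test above
        Filter combined = new ChainedFilter(new Filter[] {
            new QueryWrapperFilter(tq),
            new QueryWrapperFilter(new TermQuery(new Term("text", "now"))),
            df // dedup participates in the same AND instead of running separately
        }, ChainedFilter.AND);
        ScoreDoc[] hits = searcher.search(new MatchAllDocsQuery(), combined, 1000).scoreDocs;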

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON web service API. I have classes of resource objects (some are nested), and I pass results from an IntentService that accesses the web service using the Parcelable interface for all the resource classes. The web service returns arrays of results that can potentially be large (because of the nesting; for example, a post object also contains a comments array, and each comment also contains a user object). Currently I either insert the results into a SQLite database or display them in a ListView; my relevant methods accept ArrayList<resourceClass> as arguments. (Some data needs to be stored persistently and some should not.) Since I don't know what size of lists I can handle this way without reaching the memory limits, is this a good practice? Would it be a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database from that file or display the data from it? Is there a better way to handle this? By the way, I'm parsing the JSON as a stream with Gson's Reader, so there shouldn't be memory issues at that stage.
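
    Since the JSON is already parsed as a stream, a hedged sketch of a middle ground (the Post class, the insertBatch helper, and the variables in and db are hypothetical, not from the question) is to never materialize the whole ArrayList at all: read the array element by element with Gson's JsonReader and persist in fixed-size batches, so memory use stays bounded no matter how large the response is:

        // assumes imports from com.google.gson, com.google.gson.stream, java.io, java.util,
        // an InputStream `in` from the connection and a SQLiteDatabase `db`
        JsonReader json = new JsonReader(new InputStreamReader(in, "UTF-8"));
        Gson gson = new Gson();
        List<Post> batch = new ArrayList<Post>();
        json.beginArray();
        while (json.hasNext()) {
            Post post = gson.fromJson(json, Post.class); // deserialize one element at a time
            batch.add(post);
            if (batch.size() == 50) {
                insertBatch(db, batch); // hypothetical helper: one transaction per batch
                batch.clear();
            }
        }
        json.endArray();
        json.close();
        if (!batch.isEmpty()) {
            insertBatch(db, batch);
        }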

  • How do I fully reload my Sencha Touch App when tapping 'refresh'?

    - by torr
    My list view layout is similar to Pinterest, with absolutely positioned blocks. Unfortunately, this seems to require a full re-initialization on refresh: if I reload only the store (as below), the new blocks are positioned incorrectly. How do I reload the app when the user taps refresh? This is my view:

        Ext.require(['Ext.data.Store', 'MyApp.model.StreamModel'], function() {
            Ext.define('MyApp.view.HomeView', {
                extend: 'Ext.navigation.View',
                xtype: 'homepanel',
                requires: ['Ext.dataview.List'],
                config: {
                    title: 'Home',
                    iconCls: 'home',
                    styleHtmlContent: true,
                    navigationBar: {
                        items: [
                            {
                                xtype: 'button',
                                iconMask: true,
                                iconCls: 'refresh',
                                align: 'left',
                                action: 'refreshButton'
                            }
                        ]
                    },
                    items: {
                        title: 'My',
                        xtype: 'list',
                        itemTpl: [
                            '<div class="post">',
                            ...
                            '</div>'
                        ].join(''),
                        store: new Ext.data.Store({
                            model: 'MyApp.model.StreamModel',
                            autoLoad: true,
                            storeId: 'stream'
                        })
                    }
                }
            });
        });

    My model:

        Ext.define('MyApp.model.StreamModel', {
            extend: 'Ext.data.Model',
            config: {
                fields: ['post_id', 'comments'],
                proxy: {
                    type: 'jsonp',
                    url: 'http://My.com/app',
                    reader: {
                        type: 'json'
                    }
                }
            }
        });

    And my controller:

        Ext.define('MyApp.controller.RefreshController', {
            extend: 'Ext.app.Controller',
            requires: ['Ext.navigation.View'],
            config: {
                control: {
                    'button[action=refreshButton]': {
                        tap: 'refresh'
                    }
                }
            },
            refresh: function() {
                // Ext.StoreMgr.get('stream').load();
                // here I'd like to reload the app instead, not just the store
            }
        });

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 database:

        Tasks_Table
        - id
        - task_complete
        - task_active
        - column_1
        - ...
        - column_N

    The table stores instructions for uncompleted tasks that have to be executed by a service, and I want to be able to scale my system in the future. Until now, only one service on one computer read from the table: I have a stored procedure that selects all uncompleted, inactive tasks, and as the service begins to process the tasks it updates the task_active flag in all the returned rows. To enable scaling of the system, I want to allow deployment of the service on more machines. Because I want to prevent a task from being returned to more than one service, I have to update the stored procedure that returns uncompleted, inactive tasks. I figure that I have to lock the table (only one reader at a time - I know I have to use an appropriate ISOLATION LEVEL) and update the task_active flag in each row of the result set before returning the result set. So my question is: how do I modify the SELECT result set in the stored procedure before returning it?

  • Robust DateTime parser library for .NET

    - by Frank Krueger
    Hello, I am writing an RSS and mail reader app in C# (technically MonoTouch) and have run into the issue of parsing DateTimes. I see a lot of variance in how dates are presented in the wild, and have begun writing a function like this:

        public static DateTime ParseTime(string timeStr)
        {
            var formats = new string[] {
                "ddd, d MMM yyyy H:mm:ss \"GMT+00:00\"",
                "d MMM yyyy H:mm:ss \"EST\"",
                "yyyy-MM-dd\"T\"HH:mm:ss\"Z\"",
                "ddd MMM d HH:mm:ss \"+0000\" yyyy",
            };
            try {
                return DateTime.Parse(timeStr);
            }
            catch (Exception) { }
            foreach (var f in formats) {
                try {
                    var t = DateTime.ParseExact(timeStr, f, CultureInfo.InvariantCulture);
                    return t;
                }
                catch (Exception) { }
            }
            return DateTime.MinValue;
        }

    This, well, makes me sick. Three points: (1) it's silly of me to think that I can actually collect a format list that will cover everything out there; (2) it's wrong - notice that I'm treating an EST date-time as UTC (since .NET seems oblivious to time zones); (3) I don't like using exceptions for logic. I am looking for an existing library (source only, please) that is known to handle a bunch of these formats. Also, I would like to keep using UTC DateTimes throughout my code, so whatever library is suggested should be able to produce DateTimes. Is there anything out there like this?

  • File streaming in PHP - How to replicate this C#.net code in PHP?

    - by openid_kenja
    I'm writing an interface to a web service where we need to upload configuration files. The documentation only provides a sample in C#.NET, which I am not familiar with, and I'm trying to implement this in PHP. Can someone familiar with both languages point me in the right direction? I can figure out all the basics, but I'm trying to find suitable PHP replacements for the FileStream, ReadBytes, and UploadDataFile functions. I believe that the RecService object contains the URL for the web service. Thanks for your help!

        private void UploadFiles()
        {
            clientAlias = "<yourClientAlias>";
            string filePath = "<pathToYourDataFiles>";
            string[] fileList = { "Config.txt", "ProductDetails.txt", "BrandNames.txt",
                                  "CategoryNames.txt", "ProductsSoldOut.txt", "Sales.txt" };
            RecommendClient RecService = new RecommendClient();
            for (int i = 0; i < fileList.Length; i++)
            {
                bool lastFile = (i == fileList.Length - 1); // start generator after last file
                try
                {
                    string fileName = filePath + fileList[i];
                    if (!File.Exists(fileName)) continue; // file not found

                    // set up a file stream and binary reader for the selected file
                    // and convert to byte array
                    FileStream fStream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                    BinaryReader br = new BinaryReader(fStream);
                    byte[] data = br.ReadBytes((int)numBytes);
                    br.Close();

                    // pass byte array to the web service
                    string result = RecService.UploadDataFile(clientAlias, fileList[i], data, lastFile);
                    fStream.Close();
                    fStream.Dispose();
                }
                catch (Exception ex)
                {
                    // log an error message
                }
            }
        }

  • Am I using DO WHILE NOT and EOF in VBScript properly?

    - by Derek Drummond
    This block of code is causing my page to fail to load; when I comment out the line DO WHILE NOT Rs.EOF, the page loads properly.

        SQL_Command_String = "SELECT * FROM Seminars WHERE [SeminarID] = 5 ORDER BY DESC"
        Rs = SQLConnection.Execute(SQL_Command_String)
        DO WHILE NOT Rs.EOF
            file1 = "./seminars/" & seminar_type & "/" & seminar_year & "/" & Rs("Date") & "-" & Rs("Year") & "_" & Rs("Last") & ".pdf"
            file2 = "./seminars/" & seminar_type & "/" & seminar_year & "/" & Rs("Date") & "-" & seminar_year & "_" & Rs("Last") & "(handouts).pdf"
            file3 = "./seminars/" & seminar_type & "/" & seminar_year & "/" & Rs("Date") & "-" & seminar_year & "_" & Rs("Last") & "_Flyer.pdf"
            Rs.MoveNext
        LOOP

    Is SQL_Command_String an invalid SQL command, or is my reader not being used properly when I try to build that specific file path, e.g. Rs("Date")? Is there an option where I could do something like DO WHILE Rs.hasNextLine?

  • How do I insert an HTML file that has Hebrew text and read it back in SQL Server 2008 using the FILESTREAM option?

    - by gadym
    Hello all, I am new to the FILESTREAM option in SQL Server 2008, but I already understand how to enable it and how to create a table that allows you to save files. Let's say my table contains: id, name, filecontent. I tried to insert an HTML file (that has Hebrew chars/text in it) into this table. I'm writing in ASP.NET (C#), using Visual Studio 2008, but when I try to read the content back, Hebrew chars become '?'. The actions I took were:

    1. I read the file like this:

        // open the stream reader
        System.IO.StreamReader aFile = new StreamReader(FileName, System.Text.UTF8Encoding.UTF8);
        // read the file to the end
        stream = aFile.ReadToEnd();
        // close the file
        aFile.Close();
        return stream; // return the stream

    2. I inserted the 'stream' into the filecontent column as binary data.

    3. I tried to do a SELECT on this column, and the data did return (after I converted it back to a string), but Hebrew chars became '?'.

    How do I solve this problem? What should I pay attention to? Thanks, gadym

  • Java Trying to get a line of source from a website

    - by dsta
    I'm trying to get one line of source from a website and then return that line back to main. I keep getting an error at the line where I define the InputStream in. Why am I getting an error at this line?

        public class MP3LinkRetriever {
            private static String line;

            public static void main(String[] args) {
                String link = "www.google.com";
                String line = "";
                while (link != "") {
                    link = JOptionPane.showInputDialog("Please enter the link");
                    try {
                        line = Connect(link);
                    } catch (Exception e) {
                    }
                    JOptionPane.showMessageDialog(null, "MP3 Link: " + parseLine(line));
                    String text = line;
                    Toolkit.getDefaultToolkit().getSystemClipboard()
                           .setContents(new StringSelection(text), new ClipboardOwner() {
                               public void lostOwnership(Clipboard c, Transferable t) {
                               }
                           });
                    JOptionPane.showMessageDialog(null, "Link copied to your clipboard");
                }
            }

            public static String Connect(String link) throws Exception {
                String strLine = null;
                InputStream in = null;
                try {
                    URL url = new URL(link);
                    HttpURLConnection uc = (HttpURLConnection) url.openConnection();
                    in = new BufferedInputStream(uc.getInputStream());
                    Reader re = new InputStreamReader(in);
                    BufferedReader r = new BufferedReader(re);
                    int index = -1;
                    while ((strLine = r.readLine()) != null && index == -1) {
                        index = strLine.indexOf("<source src");
                    }
                } finally {
                    try {
                        in.close();
                    } catch (Exception e) {
                    }
                }
                return strLine;
            }

            public static String parseLine(String line) {
                line = line.replace("<source", "");
                line = line.replace(" src=", "");
                line = line.replace("\"", "");
                line = line.replace("type=", "");
                line = line.replace("audio/mpeg", "");
                line = line.replace(">", "");
                return line;
            }
        }
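
    If the error is a compile-time "cannot find symbol" on the InputStream declaration, the usual cause is a missing java.io import. A minimal hedged sketch of the fetch method with the imports spelled out (the "<source src" marker and the overall flow are taken from the question; the class and method names are hypothetical). Note also that a URL like "www.google.com" without an http:// prefix makes new URL(...) throw MalformedURLException at runtime:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public final class SourceLineFetcher {
            // returns the first line containing "<source src", or null if none is found
            public static String fetchSourceLine(String link) throws Exception {
                URL url = new URL(link); // link must include the protocol, e.g. "http://..."
                HttpURLConnection uc = (HttpURLConnection) url.openConnection();
                BufferedReader r = new BufferedReader(new InputStreamReader(uc.getInputStream()));
                try {
                    String strLine;
                    while ((strLine = r.readLine()) != null) {
                        if (strLine.contains("<source src")) {
                            return strLine;
                        }
                    }
                    return null;
                } finally {
                    r.close();
                }
            }
        }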

  • ExtJS - Loading a grid only when called

    - by Oxi
    I have a form and a grid. The user must enter data in the form fields and then see the related records in the grid. I want to implement a search form: e.g. the user types the name and gender of a student, then gets a grid of all students that have the same name and gender. So I use Ajax to send the form field values to PHP, which then builds a json_encode response that is used in the grid store. I am really not sure if my idea is good, but I haven't found another way to do it. The problem is that when I set autoLoad to true in the store, the grid is automatically filled with all data - not just what I asked for. So I understand that I have to set autoLoad to false, but then the result is not shown in the grid, even though it is returned successfully in Firebug! I don't know what to do. My view:

        {
            xtype: 'panel',
            layout: 'fit',
            id: 'searchResult',
            flex: 7,
            title: '<div style="text-align:center;"/>SearchResultGrid</div>',
            items: [
                {
                    xtype: 'gridpanel',
                    store: 'advSearchStore',
                    id: 'AdvSearch-grid',
                    columns: [
                        {
                            xtype: 'gridcolumn',
                            dataIndex: 'name',
                            align: 'right',
                            text: 'name'
                        },
                        {
                            xtype: 'gridcolumn',
                            dataIndex: 'gender',
                            align: 'right',
                            text: 'gender'
                        }
                    ],
                    viewConfig: {
                        id: 'Arr',
                        emptyText: 'noResult'
                    },
                    requires: ['MyApp.PrintSave_toolbar'],
                    dockedItems: [
                        {
                            xtype: 'PrintSave_tb',
                            dock: 'bottom'
                        }
                    ]
                }
            ]
        },

    My store and model:

        Ext.define('AdvSearchPost', {
            extend: 'Ext.data.Model',
            proxy: {
                type: 'ajax',
                url: 'AdvSearch.php',
                reader: {
                    type: 'json',
                    root: 'Arr',
                    totalProperty: 'totalCount'
                }
            },
            fields: [
                { name: 'name' },
                { name: 'type_and_cargo' }
            ]
        });

        Ext.create('Ext.data.Store', {
            pageSize: 10,
            autoLoad: false,
            model: 'AdvSearchPost',
            storeId: 'AdvSearchPost'
        });

  • Accept All Cookies via HttpClient

    - by Vinay
    So this is currently how my app is set up:

    1. Login activity.
    2. Once logged in, other activities may be fired up that use PHP scripts which require the cookies sent from logging in.

    I am using one HttpClient across my app to ensure that the same cookies are used, but my problem is that two of the three cookies are being rejected. I do not care about the validity of the cookies, but I do need them to be accepted. I tried setting the CookiePolicy, but that hasn't worked either. This is what logcat says:

        11-26 10:33:57.613: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0][name: cookie_user_id][value: 1][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php"
        11-26 10:33:57.593: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0][name: cookie_session_id][value: 1985208971][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php"

    I am sure that my actual code is correct (my app still logs in correctly, it just doesn't accept the aforementioned cookies), but here it is anyway:

        HttpGet httpget = new HttpGet(/* MY URL */);
        HttpResponse response;
        response = Main.httpclient.execute(httpget);
        HttpEntity entity = response.getEntity();
        InputStream in = entity.getContent();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        StringBuilder sb = new StringBuilder();

    From here I use the StringBuilder to simply get the String of the response; nothing fancy. I understand that the reason my cookies are being rejected is an "Illegal path attribute" (I am running a script at /mobile-api/login.php, whereas the cookie will return with a path of just "/" for trackallthethings), but I would like to accept the cookies anyhow. Is there a way to do this?
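
    A hedged sketch for Apache HttpClient 4.x (assuming Main.httpclient is a DefaultHttpClient, which the question does not state): register a custom cookie spec whose validate() is a no-op, then select it as the cookie policy, so cookies with malformed path attributes are kept instead of rejected:

        // assumes the org.apache.http.* (HttpClient 4.x) imports are available
        CookieSpecFactory lenientFactory = new CookieSpecFactory() {
            public CookieSpec newInstance(HttpParams params) {
                return new BrowserCompatSpec() {
                    @Override
                    public void validate(Cookie cookie, CookieOrigin origin) {
                        // accept every cookie, even with an illegal path attribute
                    }
                };
            }
        };
        DefaultHttpClient client = Main.httpclient;
        client.getCookieSpecs().register("lenient", lenientFactory);
        client.getParams().setParameter(ClientPNames.COOKIE_POLICY, "lenient");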

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow). From what I understand, SMO does not support concurrent operations, and my tests have borne that out: there seems to be shared state at the Server object level, for things like DataReader context, which keeps throwing exceptions such as "reader is already open". I apologize for not having the exact exceptions I am getting; I will try to get them and update this post. I am no expert on SMO and am just feeling my way through, to be honest. I'm not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you do something like this? Am I using the wrong technology with SMO? All I want to do is execute SQL scripts against databases in a single SQL Server instance in parallel. Thanks for any help you can give, Daniel

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:

    1. Type one command to run the test suite for ANY project I find on GitHub.
    2. Run autotest for ANY GitHub project so I can fork and make TESTABLE contributions.
    3. Build gems from the ground up with autotest and Shoulda.

    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: what is your test environment setup - you, the SO reader and GitHub project committer - using autotest, so that whenever you git clone a gem you can run the tests and autotest-develop it if desired? What are the guys who are writing the Paperclip tests and the Authlogic tests doing? What is their setup? Thanks for the insight. I'm looking for answers that will make me a more effective tester.

  • Apache Lucene: Is Relevance Score Always Between 0 and 1?

    - by Eamorr
    Greetings, I have the following Apache Lucene snippet that's giving me some nice results:

        int numHits = 100;
        int resultsPerPage = 100;
        IndexSearcher searcher = new IndexSearcher(reader);
        TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, true);
        Query q = parser.parse(queryString);
        searcher.search(q, collector);
        ScoreDoc[] hits = collector.topDocs(0 * resultsPerPage, resultsPerPage).scoreDocs;
        Results r = new Results();
        r.length = hits.length;
        for (int i = 0; i < hits.length; i++) {
            Document doc = searcher.doc(hits[i].doc);
            double distanceKm = getGreatCircleDistance(lucene2double(doc.get("lat")), lucene2double(doc.get("lng")),
                    Double.parseDouble(userLat), Double.parseDouble(userLng));
            double newRelevance = ((1 / distanceKm) * Math.log(hits[i].score) / Math.log(2)) * (0 - 1);
            System.out.println(hits[i].doc + "\t" + hits[i].score + "\t" + doc.get("content")
                    + "\tKm=" + distanceKm + "\trlvnc=" + String.valueOf(newRelevance));
        }

    What I want to know is: is hits[i].score always between 0 and 1? It seems that way, but I can't be sure; I've even checked the Lucene documentation (class ScoreDoc) to no avail. You'll see I'm calculating the log of the "newRelevance" value, which is based on hits[i].score. I need hits[i].score to be between 0 and 1, because if it is below zero I'll get an error, and above 1 the sign will change from negative to positive. I hope some Lucene expert out there can offer me some insight. Many thanks,
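
    For what it's worth, a hedged sketch of the usual workaround (an assumption on my part, not from the question): Lucene's default similarity produces non-negative scores, but they are not guaranteed to stay at or below 1, so it is safer to normalize each hit against the best score of the result set before taking the logarithm:

        // sketch: normalize scores into (0, 1] before the log; assumes the
        // surrounding variables (collector, resultsPerPage, distanceKm) as above
        TopDocs topDocs = collector.topDocs(0, resultsPerPage);
        float maxScore = topDocs.getMaxScore();
        for (ScoreDoc hit : topDocs.scoreDocs) {
            double normalized = hit.score / maxScore; // the top hit maps to exactly 1
            double newRelevance = -((1 / distanceKm) * (Math.log(normalized) / Math.log(2)));
            // Math.log(normalized) <= 0 here, so negating keeps newRelevance >= 0
        }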
