Search Results

Search found 5998 results on 240 pages for 'rise against'.

  • Programmatically open an email from a POP3 account and extract an attachment

    - by Josh
    We have a vendor that sends CSV files as email attachments. These CSV files contain statuses that are imported into our application. I'm trying to automate the process end-to-end, but it currently depends on someone opening an email and saving the attachment to a server share so the application can use the file. Since I cannot convince the vendor to change their process, such as offering an FTP location or a web service, I'm stuck with automating the existing process. Does anyone know of a way to programmatically open an email from a POP3 account and extract an attachment? The preferred solution would reside on a Windows 2003 server, be written in VB.NET, and be secure. The application can reside on the same server as the POP3 server; for example, we could set up the free POP3 server that comes with Windows Server and pull against the mail file stored on the file system. BTW, we are willing to pay for an off-the-shelf solution, if one exists. Note: I did look at this question, but the answer points to a CodeProject solution that doesn't deal with attachments.
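
    A minimal sketch of the retrieve-and-extract step, using Python's standard library rather than VB.NET, just to illustrate the protocol-level approach (the server name, credentials, and save location are placeholders):

        import poplib
        from email import message_from_bytes

        # Connect and authenticate against the POP3 server (placeholders).
        conn = poplib.POP3("pop3.example.com")
        conn.user("status-inbox")
        conn.pass_("secret")

        for i in range(1, len(conn.list()[1]) + 1):
            # Fetch the raw message and parse it into a MIME tree.
            raw = b"\r\n".join(conn.retr(i)[1])
            msg = message_from_bytes(raw)
            for part in msg.walk():
                filename = part.get_filename()
                if filename and filename.lower().endswith(".csv"):
                    # Save the CSV attachment where the import job expects it.
                    with open(filename, "wb") as f:
                        f.write(part.get_payload(decode=True))
        conn.quit()

    In VB.NET the same flow needs a third-party POP3/MIME library, since the .NET 2.0 base class library has no built-in POP3 client.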

  • Deploying a .NET COM DLL, getting error (0x80070002)

    - by Brett
    I have a .NET COM assembly I am attempting to deploy to a web server (IIS 6, Windows 2003). We have successfully deployed this assembly to our test environment, but the production environment is not working. The assembly is being called from a classic ASP page. Every time that page tries to initialize the assembly with

        Set LTMRender = CreateObject("LTMRender.Render")

    I get an error "Error Type:, (0x80070002)". This error seems to indicate a permission-denied or file-not-found type of problem. I created a test app to see if the assembly works outside of the web page. The .exe initializes the assembly and then makes a call designed to fail, which in turn causes the assembly to produce a log file. It works if I run the .exe in the same folder as the assembly, but fails if I run it elsewhere. For some reason, the assembly is not accessible from outside its folder. I can't figure out why this won't work.

    Things I have confirmed:

        - The deployment folder has adequate permissions. We have confirmed that the folder the assembly is installed in has the correct permissions for all the necessary user accounts.
        - The assembly is signed with a strong name, and was registered with regasm.exe C:\_WebSites\LTMRender\LTMRender.dll /codebase /tlb:C:\_WebSites\LTMRender\LTMRender.tlb. Regasm reported success.
        - The assembly has the attribute and relevant GUIDs set correctly.

    Any tips?

    EDIT: We ran filemon against my testapp.exe and it seems to have indicated what the problem is. When testapp.exe runs in the D:\_websites\DocWebV2\ or D:\_websites\DocWebV2\LTMRender\ folder, it succeeds, and filemon shows

        D:\_websites\DocWebV2\LTMRender\pinPDF.dll SUCCESS

    If I run my testapp.exe in D:\_websites\DocWebV2\Client - where my ASP pages run - it shows

        D:\_websites\DocWebV2\pinPDF.dll NAME NOT FOUND

    and then

        D:\_websites\DocWebV2\pinPDF\pinPDF.dll FILE NOT FOUND

    I'm not sure why it is not looking in the correct folder when run under this particular folder only.

  • Use of assertions for type checking in PHP?

    - by user151841
    I do some checking of arguments in my classes in PHP using exception-throwing functions. I have functions that do a basic check (===, in_array, etc.) and throw an exception on false. So I can do

        assertNumeric($argument, "\$argument is not numeric.");

    instead of

        if ( ! is_numeric($argument) ) {
            throw new Exception("\$argument is not numeric.");
        }

    which saves some typing. I was reading in the comments of the PHP manual page on assert() that, as noted on Wikipedia, "assertions are primarily a development tool, they are often disabled when a program is released to the public," and "Assertions should be used to document logically impossible situations and discover programming errors - if the 'impossible' occurs, then something fundamental is clearly wrong. This is distinct from error handling: most error conditions are possible, although some may be extremely unlikely to occur in practice. Using assertions as a general-purpose error handling mechanism is usually unwise: assertions do not allow for graceful recovery from errors, and an assertion failure will often halt the program's execution abruptly. Assertions also do not display a user-friendly error message."

    This means that the advice given by "gk at proliberty dot com" to force assertions to be enabled, even when they have been disabled manually, goes against the best practice of only using them as a development tool. So, am I 'doing it wrong'? What other/better ways of doing this are there?
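
    For what it's worth, the guard-function pattern described above (raise on bad input, never disabled in production) translates to most languages; here is a sketch in Python, where the contrast with assert is the same - Python's -O flag strips assert statements, just as PHP's assert() can be switched off:

        def require_numeric(value, message):
            # Unlike a bare `assert`, this check survives production settings
            # (python -O removes assert statements entirely).
            if not isinstance(value, (int, float)):
                raise TypeError(message)

        def set_price(price):
            require_numeric(price, "price is not numeric")
            return price

        set_price(9.99)   # fine
        set_price("x")    # raises TypeError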

  • ASP.NET validators alignment issue

    - by Mahesh
    Hi, I am developing a contact-us web page which has an input field called Email. It is validated by a required field validator and a regular expression validator with appropriate messages:

        Required: Enter Email
        Regular Expression: Invalid Email

    I am setting these two up as given below:

        <asp:TextBox ID="txtEmail" runat="server"></asp:TextBox>
        <font color="#FF0000">*</font>
        <asp:RequiredFieldValidator ID="rfvemail" CssClass="error_text" ControlToValidate="txtEmail"
            runat="server" ErrorMessage="Enter email address."></asp:RequiredFieldValidator>
        <asp:RegularExpressionValidator ID="revemail" runat="server" ControlToValidate="txtEmail"
            ErrorMessage="Invalid Email"
            ValidationExpression="\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"></asp:RegularExpressionValidator>

    My problem is that both "Enter Email" and "Invalid Email" occupy their own space. For example: if I leave the email empty and press submit, "Enter Email" is displayed right next to the field. If I enter an invalid email (xxx), "Enter Email" is off but still taking up its space, and the "Invalid Email" message is displayed after the space taken by "Enter Email". Is there any way to remove this space?

    Mahesh

  • Ruby: what is the pitfall in this simple code excerpt that tests variable existence?

    - by zipizap
    I'm starting with Ruby, and while making some test samples I've stumbled on an error in the code that I don't understand. The code is meant to test whether a variable finn is defined?(), and if it is defined, increment it. If it isn't defined, it will define it with the value 0 (zero). As the code threw an error, I started to decompose it into small pieces and run them, to better trace where the error was coming from. The code was run in IRB, irb 0.9.5(05/04/13), using ruby 1.9.1p378.

    First I verify that the variable finn is not yet defined, and all is OK:

        ?> finn
        NameError: undefined local variable or method `finn' for main:Object
            from (irb):134
            from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>'
        >>

    Then I verify that the following inline condition executes as expected, and all is OK:

        ?> ((defined?(finn)) ? (finn+1):(0))
        => 0

    And now comes the code that throws the error:

        ?> finn=((defined?(finn)) ? (finn+1):(0))
        NoMethodError: undefined method `+' for nil:NilClass
            from (irb):143
            from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>'

    I was expecting that the code would not throw any error, and that after executing it the variable finn would be defined with a first value of 0 (zero). But instead, the code throws the error, and finn gets defined but with a value of nil:

        >> finn
        => nil

    Where might the error come from?!? Why does the inline condition work alone, but not when used for the finn assignment? Any help appreciated :)

  • How does one convert from a Java resultset to ColdFusion query in Railo?

    - by Shawn Grigson
    The following works fine in CFMX 7 and CF8, and I'd assume CF9 as well:

        <!--- 'conn' is a JDBC connection --->
        <cfset stat = conn.createStatement() />
        <cfset rs = stat.executeQuery(trim(arguments.sql)) />
        <!--- convert this Java resultset to a CF query recordset --->
        <cfset queryTable = CreateObject("java", "coldfusion.sql.QueryTable")>
        <cfset queryTable.init(rs) >
        <cfset query = queryTable.FirstTable() />

    This creates a statement using a JDBC driver and executes a query against it, putting the results into a Java resultset; then coldfusion.sql.QueryTable is instantiated and passed the Java resultset object, and queryTable.FirstTable() is called, which returns an actual ColdFusion resultset (for cfloop and the like).

    The problem comes with a difference in Railo's implementation. Running this code in Railo returns the following error:

        No matching Constructor for coldfusion.sql.QueryTable(org.sqlite.RS) found.

    I've dumped the Railo java object, and don't see init() among the methods. Am I missing something simple? I'd love to get this working in Railo as well. Please note: I am doing a DSN-less connection to a SQLite db. I understand how to set up a CF datasource. My only hiccup at this point is doing the translation from a Java resultset to a Railo query.

  • Creating a DataTable by filtering another DataTable

    - by Jeff Dege
    I'm working on a system that currently has a fairly complicated function that returns a DataTable, which it then binds to a GUI control on an ASP.NET WebForm. My problem is that I need to filter the data returned - some of the data should not be displayed to the user. I'm aware of DataTable.Select(), but that's not really what I need. First, it returns an array of DataRows, and I need a DataTable so I can databind it to the GUI control. But more importantly, the filtering I need to do isn't something that can easily be put into a simple expression. I have an array of the elements which I do not want displayed, and I need to compare each element from the DataTable against that array. What I could do, of course, is create a new DataTable, read everything out of the original, add to the new one what is appropriate, then databind the new one to the GUI control. But that just seems wrong, somehow. In this case, the number of elements in the original DataTable isn't likely to be large enough that copying them all in memory will cause too much trouble, but I'm wondering if there is another way. Does the .NET DataTable have functionality that would allow me to filter via a callback function?

  • Core Data: inverse relationship only mirrors when I edit the mutable set. Not sure why.

    - by zorn
    My model is set up so Business has many clients, and Client has one business. The inverse relationship is set up in the mom file. I have a unit test like this:

        - (void)testNewClientFromBusiness {
            PTBusiness *business = [modelController newBusiness];
            STAssertTrue([[business clients] count] == 0, @"is actually %d", [[business clients] count]);
            PTClient *client = [business newClient];
            STAssertTrue([business isEqual:[client business]], nil);
            STAssertTrue([[business clients] count] == 1, @"is actually %d", [[business clients] count]);
        }

    I implement -newClient inside of PTBusiness like this:

        - (PTClient *)newClient {
            PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client"
                                    inManagedObjectContext:[self managedObjectContext]];
            [client setBusiness:self];
            [client updateLocalDefaultsBasedOnBusiness];
            return client;
        }

    The test fails because [[business clients] count] is still 0 after -newClient is called. If I implement it like this:

        - (PTClient *)newClient {
            PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client"
                                    inManagedObjectContext:[self managedObjectContext]];
            NSMutableSet *group = [self mutableSetValueForKey:@"clients"];
            [group addObject:client];
            [client updateLocalDefaultsBasedOnBusiness];
            return client;
        }

    the test passes. My questions: Am I right in thinking the inverse relationship is only updated when I interact with the mutable set? That seems to go against some other Core Data docs I've read. Does the fact that this is running in a unit test without a run loop have anything to do with it? Any other troubleshooting recommendations? I'd really like to figure out why I can't set up the relationship at the client end.

  • How do shared libraries work vis-à-vis static libraries?

    - by goldenmean
    Hello, I am working on creating and linking a shared library (.so). While working with shared libraries, many questions popped up for which I could not find satisfying answers when I searched, hence putting them here:

        1. How is a shared library different from a static library? What are the key differences in the way they are created and executed?
        2. In the case of a shared library, at what point are the addresses assigned from which a particular function in the library will be loaded and run? Who assigns those load/run addresses?
        3. Will an application linked against a shared library be slower in execution compared to one linked with a static library?
        4. Will the application executable size differ in these two cases?
        5. Can one do source-level debugging by stepping into functions defined inside a shared library? Is anything extra needed to make these functions visible to the application?
        6. What are the pros and cons of using either kind of library?

    Thanks. -AD

  • Tcl: how to read, extract and count occurrences in .txt files (current directory)

    - by Passion
    Hi folks, I am a beginner to scripting and vigorously learning Tcl for the development of an embedded system.

    I have to search for files with the .txt extension in the current directory, count the number of occurrences of each different "Interface # nnnn" string in those files, where nnnn is a hexadecimal number of four up to 32 digits, and output a table of interface numbers against occurrence counts. I am facing implementation issues while writing the script, i.e., I am unable to implement data structures like a linked list or a two-dimensional array. I am rewriting the script using a multi-dimensional array (passing values into and out of the arrays in a procedure) in Tcl to scan through every .txt file and search for the string/regular expression "Interface #" to count and display the number of occurrences. If someone could help me complete this part it would be much appreciated.

    Here is my piece of code for searching .txt files in the present directory and obtaining their sizes:

        set files [glob *.txt]
        if { [llength $files] > 0 } {
            puts "Files:"
            foreach f [lsort $files] {
                puts "  [file size $f] - $f"
            }
        } else {
            puts "(no files)"
        }

    I reckon these are all the logical steps needed to complete it:

        1. Once a .txt file is found, open all .txt files in read-only mode.
        2. Create an array or list using a procedure (proc), with interface number set to NULL and interface count set to zero.
        3. Scan through each .txt file and search for the string or regular expression "Interface #".
        4. When a match is found in a .txt file, check the interface number and increment the count for the corresponding entry; otherwise add a new element to the interface number list.
        5. If there are no files, return to the first directory.

    My output should look like this:

        Interface    Frequency
        123f         3
        1232         4
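
    For comparison, here is a sketch of the whole task in Python; a Tcl version would follow the same shape (glob the files, scan with a regex, use an array/dict as the counter). The exact "Interface #" pattern below is an assumption based on the description:

        import glob
        import re
        from collections import Counter

        counts = Counter()
        pattern = re.compile(r"Interface\s*#\s*([0-9a-fA-F]{1,32})")

        for path in glob.glob("*.txt"):
            with open(path) as f:
                # Count every hexadecimal interface number in this file.
                counts.update(pattern.findall(f.read()))

        print("Interface    Frequency")
        for iface, n in counts.most_common():
            print(f"{iface:<12} {n}")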

  • Using R's zoo to plot multiple series with error bars

    - by dnagirl
    I have data that looks like this:

        > head(data)
          groupname ob_time dist.mean  dist.sd dur.mean   dur.sd   ct.mean    ct.sd
        1      rowA     0.3  61.67500 39.76515 43.67500 26.35027  8.666667 11.29226
        2      rowA    60.0  45.49167 38.30301 37.58333 27.98207  8.750000 12.46176
        3      rowA   120.0  50.22500 35.89708 40.40000 24.93399  8.000000 10.23363
        4      rowA   180.0  54.05000 41.43919 37.98333 28.03562  8.750000 11.97061
        5      rowA   240.0  51.97500 41.75498 35.60000 25.68243 28.583333 46.14692
        6      rowA   300.0  45.50833 43.10160 32.20833 27.37990 12.833333 14.21800

    Each groupname is a data series. Since I want to plot each series separately, I've separated them like this:

        > A <- zoo(data[which(groupname=='rowA'),3:8],data[which(groupname=='rowA'),2])
        > B <- zoo(data[which(groupname=='rowB'),3:8],data[which(groupname=='rowB'),2])
        > C <- zoo(data[which(groupname=='rowC'),3:8],data[which(groupname=='rowC'),2])

    ETA: Thanks to gd047, now I'm using this:

        z <- dlply(data,.(groupname),function(x) zoo(x[,3:8],x[,2]))

    The resulting zoo objects look like this:

        > head(z$rowA)
             dist.mean  dist.sd dur.mean   dur.sd   ct.mean    ct.sd
        0.3   61.67500 39.76515 43.67500 26.35027  8.666667 11.29226
        60    45.49167 38.30301 37.58333 27.98207  8.750000 12.46176
        120   50.22500 35.89708 40.40000 24.93399  8.000000 10.23363
        180   54.05000 41.43919 37.98333 28.03562  8.750000 11.97061
        240   51.97500 41.75498 35.60000 25.68243 28.583333 46.14692
        300   45.50833 43.10160 32.20833 27.37990 12.833333 14.21800

    So if I want to plot dist.mean against time and include error bars equal to +/- dist.sd for each series: how do I combine the A, B, C dist.mean and dist.sd columns? And how do I make a bar plot, or perhaps better, a line graph of the resulting object?
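
    Not zoo, but for the shape of the answer: the same plot - one line per series of dist.mean over time, with +/- dist.sd as error bars - sketched in Python with pandas/matplotlib (the CSV source is a placeholder for the data frame above):

        import matplotlib.pyplot as plt
        import pandas as pd

        # One row per (groupname, ob_time), as in the question's data.
        data = pd.read_csv("series.csv")

        fig, ax = plt.subplots()
        for name, grp in data.groupby("groupname"):
            # A line per series, with one standard deviation as error bars.
            ax.errorbar(grp["ob_time"], grp["dist.mean"],
                        yerr=grp["dist.sd"], label=name, capsize=3)
        ax.set_xlabel("time")
        ax.set_ylabel("dist.mean +/- dist.sd")
        ax.legend()
        plt.show()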

  • Git Diff with Beyond Compare

    - by Avanst
    I have succeeded in getting git to start Beyond Compare 3 as a diff tool. However, when I do a diff, the file I am comparing against is not being loaded; only the latest version of the file is loaded, so there is nothing in the right pane of Beyond Compare. I am running git 1.6.3.1 under Cygwin with Beyond Compare 3. I have set up Beyond Compare as they suggest in the support section of their website, with a script like this:

        #!/bin/sh
        # diff is called by git with 7 parameters:
        #   path old-file old-hex old-mode new-file new-hex new-mode
        "path_to_bc3_executable" "$2" "$5" | cat

    Has anyone else encountered this problem and found a solution?

    Edit: I have followed the suggestions by VonC but I am still having exactly the same problem as before. I am kind of new to Git, so perhaps I am not using diff correctly. For example, I am trying to see the diff on a file with a command like this:

        git diff main.css

    Beyond Compare will then open and only display my current main.css in the left pane; there is nothing in the right pane. I would like to see my current main.css in the left pane compared to HEAD - basically what I last committed. My git-diff-wrapper.sh looks like this:

        #!/bin/sh
        # diff is called by git with 7 parameters:
        #   path old-file old-hex old-mode new-file new-hex new-mode
        "c:/Program Files/Beyond Compare 3/BCompare.exe" "$2" "$5" | cat

    My git config looks like this for diff:

        [diff]
            external = c:/cygwin/bin/git-diff-wrapper.sh

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            )
        )

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

        1. Drop the primary key while I am doing the inserting and recreate it later?
        2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
        3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew

  • Use an external data source with NUnit's TestCaseAttribute

    - by Hamman359
    Is it possible to get the values for a TestCaseAttribute from an external data source such as an Excel spreadsheet, CSV file or database - i.e., have a .csv file with one row of data per test case and pass that data to NUnit one row at a time?

    Here's the specific situation I'd like to use this for. I'm currently merging some features from one system into another. This is pretty much just a copy-and-paste process from the old system into the new one. Unfortunately, the code being moved not only does not have any tests, but is not written in a testable manner (i.e., it is tightly coupled with the database and other code). Taking the time to make the code testable isn't really possible, since it's a big mess, I'm on a tight schedule, and the entire feature is scheduled to be rewritten from the ground up in the next 6-9 months. However, since I don't like the idea of not having any tests around the code, I'm going to create some simple Selenium tests using WebDriver to test the page through the UI. While this is not ideal, it's better than nothing. The page in question has about 10 input values and about 20 values that I need to assert against after the calculations are completed, with about 30 valid combinations of values that I'd like to test. I already have the data in a spreadsheet, so it'd be nice to simply be able to pull it out rather than having to re-type it all in Visual Studio.
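
    NUnit's TestCaseSourceAttribute is the usual route for this: it points at a method or property that can read the spreadsheet/CSV itself and yield one set of arguments per row. As a language-neutral sketch of the same idea, here is the equivalent shape in Python's pytest (file name and columns invented for illustration):

        import csv

        import pytest

        def load_cases(path="cases.csv"):
            # Each CSV row becomes one test case: inputs first, expected last.
            with open(path, newline="") as f:
                return [tuple(row) for row in csv.reader(f)]

        @pytest.mark.parametrize("a,b,expected", load_cases())
        def test_calculation(a, b, expected):
            assert float(a) + float(b) == float(expected)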

  • JavaScript: input validation in the keydown event

    - by c411
    Hi, I'm attempting to validate user text input during the keydown event. The reason I validate in the keydown event is that I do not want characters considered illegal to be displayed in the input box at all. The validation I am writing looks like this:

        function validateUserInput() {
            var code = this.event.keyCode;
            if ((code < 48 || code > 57) // numerical
                && code !== 46           // delete
                && code !== 8            // backspace
                && code !== 37           // <- arrow
                && code !== 39)          // -> arrow
            {
                this.event.preventDefault();
            }
        }

    I can keep going like this; however, I am seeing drawbacks in this implementation, for example:

        1. The conditional statement becomes longer and longer as I add more conditions to be examined.
        2. keyCodes can differ between browsers.
        3. I have to check not only what is illegal but also what the exceptions are. In the example above, delete, backspace, and the arrow keys are exceptions.

    The feature I don't want to lose is that nothing appears in the textarea unless the input passes validation (if the user tries to put illegal characters in the textarea, nothing should appear at all). That is why I am not doing validation on the keyup event. So my questions are: Are there better ways to validate input in the keydown event than checking keyCode by keyCode? Are there other ways to capture user input before the browser displays it, other than the keydown event - and a way to put validation on it? Thanks for the help in advance.

  • Query MySQL data from Excel (or vice-versa)

    - by Charles
    I'm trying to automate a tedious problem. I get large Excel (.xls or .csv, whatever's more convenient) files with lists of people, and I want to compare these against my MySQL database. At the moment I'm exporting MySQL tables and reading them from an Excel spreadsheet. At that point it's not difficult to use =LOOKUP() and similar commands to do the work I need, and of course the various text processing I need to do is easy enough in Excel. But I can't help thinking that this is more work than it needs to be. Is there some way to get at the MySQL data directly from Excel? Alternately, is there a way I could access a reasonably large (~10k records) csv file from a SQL script? This seems rather basic, but I haven't managed to make it work so far. I found an ODBC connection for MySQL, but that doesn't seem to do what I need. In particular, I'm testing whether the name matches or whether any of four email addresses match. I also return information on what matched, for the benefit of the next person to use the data - something like "Name 'Bob Smith' not found, but 'Robert Smith' matches on email address robert.smith@foo".
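
    One way to cut Excel out of the loop entirely is a short script that reads the CSV and queries MySQL row by row; a sketch in Python (table, column, and credential names are invented for illustration):

        import csv

        import mysql.connector

        conn = mysql.connector.connect(user="me", password="secret",
                                       database="crm")
        cur = conn.cursor()

        with open("people.csv", newline="") as f:
            for row in csv.DictReader(f):
                # Match on exact name OR any of the four stored addresses.
                cur.execute(
                    "SELECT name, email1 FROM contacts "
                    "WHERE name = %s OR %s IN (email1, email2, email3, email4)",
                    (row["name"], row["email"]),
                )
                hit = cur.fetchone()
                print(row["name"], "->", hit if hit else "not found")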

  • Interpolating environment variables into a string in Ruby using String#scan

    - by robc
    I'm trying to interpolate environment variables into a string in Ruby and not having much luck. One of my requirements is to do something (log an error, prompt for input, whatever) if a placeholder is found in the initial string that has no matching environment variable. It looks like the block form of String#scan is what I need. Below is an irb session of my failed attempt:

        irb(main):014:0> raw_string = "need to replace %%FOO%% and %%BAR%% in here"
        => "need to replace %%FOO%% and %%BAR%% in here"
        irb(main):015:0> cooked_string << raw_string
        => "need to replace %%FOO%% and %%BAR%% in here"
        irb(main):016:0> raw_string.scan(/%%(.*?)%%/) do |var|
        irb(main):017:1*   cooked_string.sub!("%%#{var}%%", ENV[var])
        irb(main):018:1> done
        irb(main):019:1> end
        TypeError: cannot convert Array into String
            from (irb):17:in `[]'
            from (irb):17
            from (irb):16:in `scan'
            from (irb):16
            from :0

    If I use ENV["FOO"] to manually interpolate one of those, it works fine. I'm banging my head against the desk. What am I doing wrong? Ruby is 1.8.1 on RHEL or 1.8.7 on Cygwin.
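
    The TypeError, for what it's worth, arises because scan with a capture group hands the block a one-element array (["FOO"]) rather than a string, and ENV[] wants a string; the usual Ruby fix is gsub with a block. The same substitute-via-callback idea, including the "complain about unknown placeholders" requirement, sketched in Python:

        import os
        import re

        def interpolate(raw):
            def lookup(match):
                name = match.group(1)
                if name not in os.environ:
                    # Placeholder with no matching variable: log, prompt,
                    # or raise - whatever the requirement calls for.
                    raise KeyError("no environment variable for " + name)
                return os.environ[name]
            return re.sub(r"%%(.*?)%%", lookup, raw)

        print(interpolate("need to replace %%HOME%% in here"))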

  • Photoshop CS5 not recognising activeDocument

    - by Max Kielland
    I wrote a quite big script for Photoshop CS5.1 on my 64-bit Vista machine. Now when I run the very same script on my new 64-bit Windows 7 machine, the Adobe ExtendScript tool complains about activeDocument (no such element) in this simple script:

        #target photoshop
        var pDoc = app.activeDocument;
        alert("Done!");

    I have tried both with and without #target, and choosing the target in the ExtendScript tool. Is there something I have missed, or do I need to install something more? I only installed the 64-bit version of Photoshop. Is it the case that the 32-bit Photoshop has the script extensions? I don't see why I need to install both the 32-bit and 64-bit versions if I'm only going to use the 64-bit version.

    EDIT: I installed the 32-bit version as well and tried the same script against both 32-bit and 64-bit; still no difference.

    SOLVED: The mystery is solved. It is embarrassingly simple if you interpret the error message more carefully. Of course I can't get an activeDocument if there are no documents open in Photoshop, duh?!? I interpreted the message as saying the statement activeDocument wasn't recognised, but of course if I have no document there is no such element (as a Photoshop document) to give me. I'm used to C++ and would expect the result to be a NULL value or similar if there is a problem getting the document... excuses, excuses ;)

    Well, if someone else should run into the same problem, here is the answer at my expense :D

  • How do I build the latest Tycho?

    - by hedefalk
    I've tried to build Tycho now for a couple of hours and just can't get it to work. I've followed these instructions: https://docs.sonatype.org/display/TYCHO/BuildingTycho

    So, I've downloaded Eclipse 3.6RC2 and the delta packs linked from this instruction (is it for 3.5 only?): http://aniefer.blogspot.com/2009/06/using-deltapack-in-eclipse-35.html

    I've added the DeltaPack to the target platform inside the Eclipse installation, and I've installed Maven: Apache Maven 3.0-beta-1 (r935667; 2010-04-19 19:00:39+0200). I can run the first bootstrap of the build, but the second fails:

        mvn clean install -e -V -Pbootstrap-2 -Dtycho.targetPlatform=$TYCHO_TARGET_PLATFORM

        [ERROR] Internal error: java.lang.RuntimeException: Could not resolve plugin
        org.eclipse.core.net.linux.x86_null -> [Help 1]

    I've tried different things. I built an older revision against 3.5 as in this blog post: http://divby0.blogspot.com/2010/03/im-in-love-with-tycho-08-and-maven-3.html which actually produced a working Maven, but that version then can't find the Tycho plugin:

        org.apache.maven.plugin.version.PluginVersionResolutionException: Error resolving version
        for plugin 'org.codehaus.tycho:maven-tycho-plugin' from the repositories
        [local (/Users/viktor/.m2/repository), central (http://repo1.maven.org/maven2)]:
        Plugin not found in any plugin repository

    I thought the point was that the plugin would be built in once I had built a Tycho distribution...?

  • Using Objective-C objects with an NSDictionary

    - by Mark
    I want to store a URL against a UILabel so that when a user touches the label it takes them to that URL in a UIWebView. I have declared an NSDictionary like so:

        NSMutableArray *linksArray = [[NSMutableArray alloc] init];
        [linksArray addObject: [NSValue valueWithNonretainedObject: newsItem1ReadMoreLabel]];
        [linksArray addObject: [NSValue valueWithNonretainedObject: newsItem2ReadMoreLabel]];
        [linksArray addObject: [NSValue valueWithNonretainedObject: newsItem3ReadMoreLabel]];
        [linksArray addObject: [NSValue valueWithNonretainedObject: newsItem4ReadMoreLabel]];
        [linksArray addObject: [NSValue valueWithNonretainedObject: newsItem5ReadMoreLabel]];

        //NSString *ageLink = @"http://www.theage.com.au";
        NSArray *defaultLinks = [NSArray arrayWithObjects: @"1", @"2", @"3", @"4", @"5", nil];
        self.urlToLinkDictionary = [[NSMutableDictionary alloc] init];
        self.urlToLinkDictionary = [NSDictionary dictionaryWithObjects:defaultLinks forKeys:linksArray];

    Considering I used an NSValue as the key, how do I get/set the URL associated with that key, given that I only have references to the UILabels? This is what I have, but it doesn't work:

        for(NSValue *key in [self.urlToLinkDictionary allKeys]) {
            if ([key nonretainedObjectValue] == linkedLabel) {
                [self.urlToLinkDictionary setValue:[newsItem link] forKey: key];
            }
        }

    Instead I get an error: "objc_exception_throw".

  • Finding missing symbols in libstdc++ on Debian/squeeze

    - by Florian Le Goff
    I'm trying to use a pre-compiled library provided as a .so file. This file is dynamically linked against a few libraries:

        $ ldd /usr/local/test/lib/libtest.so
            linux-gate.so.1 =>  (0xb770d000)
            libstdc++-libc6.1-1.so.2 => not found
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb75e1000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7499000)
            /lib/ld-linux.so.2 (0xb770e000)
            libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb747c000)

    Unfortunately, in Debian/squeeze there is no libstdc++-libc6.1-1.so.* file, only a libstdc++.so.* file provided by the libstdc++6 package. I tried to link (using ln -s) libstdc++-libc6.1-1.so.2 to the libstdc++.so.6 file. It does not work: a batch of symbols seems to be lacking when I try to ld my .o files with this lib:

        /usr/local/test/lib/libtest.so: undefined reference to `__builtin_vec_delete'
        /usr/local/test/lib/libtest.so: undefined reference to `istrstream::istrstream(int, char const *, int)'
        /usr/local/test/lib/libtest.so: undefined reference to `__rtti_user'
        /usr/local/test/lib/libtest.so: undefined reference to `__builtin_new'
        /usr/local/test/lib/libtest.so: undefined reference to `istream::ignore(int, int)'

    What would you do? How can I find out which library exports those symbols?

  • Localizing DataAnnotations Custom Validator

    - by Gabe G
    Hello SO, I'm currently working on an MVC 2 app which has to have everything localized in n languages (currently two, neither of them English, btw). I validate my model classes with DataAnnotations, but when I wanted to validate a DateTime field I found out that the DataTypeAttribute always returns true, no matter whether the value was a valid date or not (that's because when I enter a random string "foo", the IsValid() method checks against "01/01/0001 "; don't know why). So I decided to write my own validator extending the ValidationAttribute class:

        public class DateTimeAttribute : ValidationAttribute {
            public override bool IsValid(object value) {
                DateTime result;
                if (value.ToString().Equals("01/01/0001 0:00:00")) {
                    return false;
                }
                return DateTime.TryParse(value.ToString(), out result);
            }
        }

    Now it checks OK whether the date is valid or not, but my problem starts when I try to localize it:

        [Required(ErrorMessageResourceType = typeof(MSG), ErrorMessageResourceName = "INS_DATA_Required")]
        [CustomValidation.DateTime(ErrorMessageResourceType = typeof(MSG), ErrorMessageResourceName = "INS_DATA_DataType")]
        public DateTime INS_DATA { get; set; }

    If I put nothing in the field, I get a localized message (MSG being my resource class) for the key INS_DATA_Required, but if I put in a badly formatted date I get the default "The value 'foo' is not valid for INS_DATA" message and not the localized one. What am I missing?

  • Dynamically created operators

    - by Gero
    I created a program using Dev-C++ and wxWidgets which solves a puzzle. The user must fill in the operation blocks and the result blocks, and the program solves it. I'm solving it using brute force: I generate all 9-length number combinations without repetition using a recursive algorithm, and it does that pretty fast. Up to here all is great!

    But the problem comes when my program applies the operations depending on the character in the blocks. It's extremely slow (it never gets the answer), because of the character comparisons against +, -, *, etc. I'm doing a CASE. Is there some way, or some programming language, which allows dynamic creation of operators? That way I could define the operator ROW1COL2 to be a +, and do the same for all the other operations.

    I'll leave a screenshot of the app, so it's easier to understand how the puzzle works: http://www.imageshare.web.id/images/9gg5cev8vyokp8rhlot9.png

    PS: The algorithm works; I tried it on a trivial puzzle, and it solved it in a second.
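
    The usual substitute for "dynamic operators" is a lookup table mapping each operator character to a function, which replaces the per-character CASE with a single table access; a sketch in Python (the cell naming is illustrative):

        import operator

        # Map operator characters to functions once, instead of a CASE.
        OPS = {
            "+": operator.add,
            "-": operator.sub,
            "*": operator.mul,
            "/": operator.truediv,
        }

        # e.g. the block at row 1, col 2 holds "+":
        grid = {("row1", "col2"): "+"}

        def apply_block(pos, a, b):
            return OPS[grid[pos]](a, b)

        print(apply_block(("row1", "col2"), 3, 4))  # -> 7

    In C++ the same idea works with a std::map from char to a function pointer or std::function, so the solver's hot loop does one table lookup instead of branching on each character.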

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key:

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED
            (
                [CustomerId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional nonclustered index, the execution plan shows me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    Now I'm trying to see if it's possible to improve performance, so I've created a nonclustered index for this table.

    1) First nonclustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the select against the Customers table again, the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    That seems better. Now I've deleted this just-created nonclustered index in order to create a new one.

    2) Second nonclustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ( [CustomerName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new nonclustered index, I executed the select statement again and the execution plan shows the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    So, which nonclustered index should I use? Why are the I/O and Operator costs in the execution plan the same for both? Am I doing something wrong, or is this expected? Thank you.

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do.

    I have a list of tests, with the following attributes:

        - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
        - depends: an associative array of tests/values that this test depends on;
        - join: a list of DB joins to be added when selecting items to process in this test;
        - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. Then, for each test:

        1. a list of items is selected from the database using the conditions (results of depended-on tests, joins, and depends_db);
        2. the list of items is sent to the URI (using POST or stdin);
        3. the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
        4. the results are stored in the DB;
        5. the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

        - backup: hosted on the backup machine(s), called through HTTP; checks whether the machines' backups went well;
        - DNS: hosted on the local machine, called via stdin; checks whether the machines' FQDNs have valid DNS entries.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
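
    Since Python is on the table: the dependency-tree half of this is a topological sort, which the standard library now covers directly (graphlib, Python 3.9+); a sketch with placeholder test names and a stub runner:

        from concurrent.futures import ThreadPoolExecutor
        from graphlib import TopologicalSorter

        # Each test maps to the set of tests it depends on.
        deps = {"backup": set(), "dns": set(), "report": {"backup", "dns"}}

        def run_test(name):
            # Stub: select items, POST to the test's URI, store YAML results.
            print("running", name)

        sorter = TopologicalSorter(deps)
        sorter.prepare()
        with ThreadPoolExecutor() as pool:
            while sorter.is_active():
                # Run every test whose dependencies have completed.
                ready = list(sorter.get_ready())
                futures = {pool.submit(run_test, n): n for n in ready}
                for fut, name in futures.items():
                    fut.result()
                    sorter.done(name)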
