Search Results

Search found 60391 results on 2416 pages for 'data generation'.

  • Extracting Certain XML Elements with PHP SimpleXML

    - by Peter
    I am having some problems parsing this piece of XML using SimpleXML. There is always only one Series element, and a variable number of Episode elements beneath it. I want to parse the XML so I can store the Series data in one table, and all the Episode data in another table. XML:

        <Data>
          <Series>
            <id>80348</id>
            <Genre>|Action and Adventure|Comedy|Drama|</Genre>
            <IMDB_ID>tt0934814</IMDB_ID>
            <SeriesID>68724</SeriesID>
            <SeriesName>Chuck</SeriesName>
            <banner>graphical/80348-g.jpg</banner>
          </Series>
          <Episode>
            <id>935481</id>
            <Director>Robert Duncan McNeill</Director>
            <EpisodeName>Chuck Versus the Third Dimension 2D</EpisodeName>
            <EpisodeNumber>1</EpisodeNumber>
            <seasonid>27984</seasonid>
            <seriesid>80348</seriesid>
          </Episode>
          <Episode>
            <id>935483</id>
            <Director>Robert Duncan McNeill</Director>
            <EpisodeName>Buy More #15: Employee Health</EpisodeName>
            <EpisodeNumber>2</EpisodeNumber>
            <seasonid>27984</seasonid>
            <seriesid>80348</seriesid>
          </Episode>
        </Data>

    When I attempt to access just the first Series element and its child nodes, or iterate through the Episode elements only, it does not work. I have also tried to use DOMDocument with SimpleXML, but could not get that to work at all. PHP code:

        <?php
        if (file_exists('en.xml')) {
            $data = simplexml_load_file('en.xml');
            foreach ($data as $series) {
                echo 'id: <br />' . $series->id;
                echo 'imdb: <br />' . $series->IMDB_ID;
            }
        }
        ?>

    Output:

        id:80348 imdb:tt0934814 id:935481 imdb: id:1534641 imdb:

    Any help would be greatly appreciated.
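    The loop above iterates over every child of <Data>, Series and Episode alike, which is why the Episode rows print an empty IMDB_ID. A minimal sketch of addressing the two element types separately, assuming the structure shown above:

        <?php
        $data = simplexml_load_file('en.xml');

        $series = $data->Series;               // the single <Series> element
        echo 'series id: ' . $series->id;
        echo 'imdb: ' . $series->IMDB_ID;

        foreach ($data->Episode as $episode) { // only the <Episode> elements
            echo 'episode id: ' . $episode->id;
            echo 'name: ' . $episode->EpisodeName;
        }
        ?>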

  • JavaScript execution exceeded timeout

    - by user1866265
    I use jQuery to develop a mobile application; here is my code below. The problem: when I add 5 or 6 lines to the page, all goes well, but if I add many lines it displays the error message: JavaScript execution exceeded timeout.

        function succes_recu_list_rubrique(tx, results) // after filling SQLite
        {
            console.log('ENTERED---');
            $('#lbtn').prepend("<legend>Select a category</legend><br>");
            // fill the list of step identifiers
            for (var i = 0; i < results.rows.length; i++)
            {
                $('#lbtn').append("<input name='opt1' checked type='radio' value=" + results.rows.item(i).IdRubrique + " id=" + results.rows.item(i).IdRubrique + " />");
                $('#lbtn').append('<label for=' + results.rows.item(i).IdRubrique + '>' + results.rows.item(i).LibelleRubrique + '</label>');
            }
            $('#lbtn').append('<a href="#page_dialog2" class="offer2" data-rel="dialog" data-role="button">View</a>').trigger('create');
            $('#lbtn').append('<a href="#' + id_grp_rub + '" data-role="button" data-rel="back" data-theme="c">Cancel</a>').trigger('create');
        }
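    The watchdog usually fires because the DOM is touched twice per row inside the loop, forcing a reflow each time. A sketch of the usual remedy, building the markup in one string and appending once (same data fields as above assumed):

        function succes_recu_list_rubrique(tx, results) {
            var html = "<legend>Select a category</legend><br>";
            for (var i = 0; i < results.rows.length; i++) {
                var id = results.rows.item(i).IdRubrique;
                html += "<input name='opt1' checked type='radio' value='" + id + "' id='" + id + "' />";
                html += "<label for='" + id + "'>" + results.rows.item(i).LibelleRubrique + "</label>";
            }
            // one DOM insertion and one jQuery Mobile enhancement pass instead of 2N
            $('#lbtn').append(html).trigger('create');
        }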

  • SQL error - Cannot convert nvarchar to decimal

    - by jakesankey
    I have a C# application that simply parses all of the txt documents within a given network directory and imports the data to a SQL Server db. Everything was cruising along just fine until about the 1800th file, which happened to have a few blanks in columns that are bound as decimal parameters (the value is usually zero in the files, not blank). So I got this error: "cannot convert nvarchar to decimal". I am wondering how I could tell the app to simply skip the lines that have this issue? Perhaps I could even just change the column type to varchar even though the values are numbers (what problems could this create?). Thanks for any help!

        using System;
        using System.Data;
        using System.Data.SQLite;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Threading;
        using System.Collections.Generic;
        using System.Linq;
        using System.Data.SqlClient;

        namespace JohnDeereCMMDataParser
        {
          internal class Program
          {
            public static List<string> GetImportedFileList()
            {
              List<string> ImportedFiles = new List<string>();
              using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
              {
                connect.Open();
                using (SqlCommand fmd = connect.CreateCommand())
                {
                  fmd.CommandText = @"SELECT FileName FROM CMMData;";
                  fmd.CommandType = CommandType.Text;
                  SqlDataReader r = fmd.ExecuteReader();
                  while (r.Read())
                  {
                    ImportedFiles.Add(Convert.ToString(r["FileName"]));
                  }
                }
              }
              return ImportedFiles;
            }

            private static void Main(string[] args)
            {
              Console.Title = "John Deere CMM Data Parser";
              Console.WriteLine("Preparing CMM Data Parser... done");
              Console.WriteLine("Scanning for new CMM data...");
              Console.ForegroundColor = ConsoleColor.Gray;
              using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
              {
                con.Open();
                using (SqlCommand insertCommand = con.CreateCommand())
                {
                  Console.WriteLine("Connecting to SQL server...");
                  SqlCommand cmdd = con.CreateCommand();
                  string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\CMM WENZEL\", "*_*_*.txt", SearchOption.AllDirectories);
                  List<string> ImportedFiles = GetImportedFileList();
                  // NOTE: the original used new SqlParameter("@x", DbType.Decimal), which
                  // resolves to the (name, value) constructor overload, so the parameter
                  // type was never actually set -- a likely contributor to the
                  // nvarchar-to-decimal error. SqlDbType picks the (name, SqlDbType) overload.
                  insertCommand.Parameters.Add(new SqlParameter("@FeatType", SqlDbType.NVarChar));
                  insertCommand.Parameters.Add(new SqlParameter("@FeatName", SqlDbType.NVarChar));
                  insertCommand.Parameters.Add(new SqlParameter("@Axis", SqlDbType.NVarChar));
                  insertCommand.Parameters.Add(new SqlParameter("@Actual", SqlDbType.Decimal));
                  insertCommand.Parameters.Add(new SqlParameter("@Nominal", SqlDbType.Decimal));
                  insertCommand.Parameters.Add(new SqlParameter("@Dev", SqlDbType.Decimal));
                  insertCommand.Parameters.Add(new SqlParameter("@TolMin", SqlDbType.Decimal));
                  insertCommand.Parameters.Add(new SqlParameter("@TolPlus", SqlDbType.Decimal));
                  insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", SqlDbType.Decimal));
                  foreach (string file in files.Except(ImportedFiles))
                  {
                    var FileNameExt1 = Path.GetFileName(file);
                    cmdd.Parameters.Clear();
                    cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1));
                    cmdd.CommandText = @"
                      IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                                  WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'CMMData'))
                      BEGIN
                        SELECT COUNT(*) FROM CMMData WHERE FileName = @FileExt;
                      END";
                    int count = Convert.ToInt32(cmdd.ExecuteScalar());
                    con.Close();
                    con.Open();
                    if (count == 0)
                    {
                      Console.WriteLine("Preparing to parse CMM data for SQL import...");
                      if (file.Count(c => c == '_') > 5) continue;
                      insertCommand.CommandText = @"
                        INSERT INTO CMMData
                          (FeatType, FeatName, Axis, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol,
                           PartNumber, CMMNumber, Date, FileName)
                        VALUES
                          (@FeatType, @FeatName, @Axis, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol,
                           @PartNumber, @CMMNumber, @Date, @FileName);";
                      string FileNameExt = Path.GetFullPath(file);
                      string RNumber = Path.GetFileNameWithoutExtension(file);
                      int index2 = RNumber.IndexOf("~");
                      Match RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase);
                      Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}|\d{3}[A-Z]\d\w\w\d(?=_)", RegexOptions.IgnoreCase);
                      Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase);
                      string RNumE = Convert.ToString(RNumberE);
                      string RNumD = Convert.ToString(RNumberD);
                      if (RNumberD.Value == @"") continue;
                      if (RNumberE.Value == @"") continue;
                      if (RNumberDate.Value == @"") continue;
                      if (index2 != -1) continue;
                      DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd", Thread.CurrentThread.CurrentCulture);
                      string cmmDate = dateTime.ToString("dd-MMM-yyyy");
                      string[] lines = File.ReadAllLines(file);
                      bool parse = false;
                      foreach (string tmpLine in lines)
                      {
                        string line = tmpLine.Trim();
                        if (!parse && line.StartsWith("Feat. Type,"))
                        {
                          parse = true;
                          continue;
                        }
                        if (!parse || string.IsNullOrEmpty(line))
                        {
                          continue;
                        }
                        Console.WriteLine(tmpLine);
                        foreach (SqlParameter parameter in insertCommand.Parameters)
                        {
                          parameter.Value = null;
                        }
                        string[] values = line.Split(new[] { ',' });
                        for (int i = 0; i < values.Length - 1; i++)
                        {
                          // skip blank fields so nothing empty is bound to a decimal column
                          // (the original line here, "if (i = "" || i = null)", compared the
                          // loop index to a string and did not compile)
                          if (string.IsNullOrEmpty(values[i])) continue;
                          SqlParameter param = insertCommand.Parameters[i];
                          if (param.DbType == DbType.Decimal)
                          {
                            decimal value;
                            param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                          }
                          else
                          {
                            param.Value = values[i];
                          }
                        }
                        insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumE));
                        insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumD));
                        insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
                        insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));
                        insertCommand.ExecuteNonQuery();
                        insertCommand.Parameters.RemoveAt("@PartNumber");
                        insertCommand.Parameters.RemoveAt("@CMMNumber");
                        insertCommand.Parameters.RemoveAt("@Date");
                        insertCommand.Parameters.RemoveAt("@FileName");
                      }
                    }
                  }
                  Console.WriteLine("CMM data successfully imported to SQL database...");
                }
                con.Close();
              }
            }
          }
        }

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union
        {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values like:

        const unsigned int DEFAULTS[] =
        {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like:

        const unsigned int DEFAULTS[] =
        {
            0x00000001,                  // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes: obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation-defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile-time conversion, since DEFAULTS is in global scope. Please correct me if I'm wrong about this.
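    A sketch of one workaround, assuming the compiler accepts C99 designated initializers (GCC has supported them, including on unions, since before 3.4): make the array element type the union itself, so each entry can be written as either an integer or a float literal and the compiler lays down the same 32 bits at compile time:

        typedef union
        {
            unsigned int intval;
            float floatval;
        } IntFloat;

        /* each element is one 32-bit word, written in whichever notation fits */
        const IntFloat DEFAULTS[] =
        {
            { .intval   = 0x00000001 }, /* integer default, 1   */
            { .floatval = 0.005f     }, /* float default, 0.005 */
        };

        /* reading a raw word back stays the same: DEFAULTS[i].intval */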

  • How to handle unknown initializer functions in lua?

    - by oofoe
    I want to load data written in a variant of Lua (eyeonScript). However, the data is peppered with references to initializer functions that are not in plain Lua:

        Redden = BrightnessContrast {
            Inputs = {
                Red = Input { Value = 0, },
            },
        }

    Standard Lua gives "attempt to call a nil value" or "unexpected symbol" errors. Is there any way to catch these and pass them to some sort of generic initializer? I want to wind up with a nested table data structure. Thanks!
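    A sketch of the usual trick, assuming Lua 5.1 and that the data file is syntactically valid Lua: run the chunk in an environment whose __index metamethod manufactures a generic constructor for any unknown name, so BrightnessContrast { ... } simply tags and returns its table argument.

        local env = setmetatable({}, {
            __index = function(_, name)
                -- any unknown global becomes "tag this table and hand it back"
                return function(t)
                    t.__type = name
                    return t
                end
            end
        })

        local chunk = assert(loadfile("data.eyeon")) -- file name is a placeholder
        setfenv(chunk, env)  -- Lua 5.1; in 5.2+ use load(..., "t", env) instead
        chunk()

        -- env now holds the nested tables:
        print(env.Redden.__type)            --> BrightnessContrast
        print(env.Redden.Inputs.Red.__type) --> Input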

  • JSON in an AJAX request

    - by Josh K
    I have a PHP API I'm working with that outputs everything as JSON. I need to call one of the API methods and parse the result out using an AJAX request. I am using jQuery (though it shouldn't matter). When I make the request it errors out with "parsererror" as the textStatus and "Syntax Error: invalid label" as the error thrown. Simplified code:

        $.ajax({
            type: "POST",
            url: "http://mydomain.com/api/get/userlist/" + mid,
            dataType: "json",
            dataFilter: function (data, type) {
                /* Here we assume and pray */
                users = eval(data);
                alert(users[1].id);
            },
            success: function (data, textStatus, XMLHttpRequest) {
                alert(data.length); // Should be an array, yet is undefined.
            },
            error: function (XMLHttpRequest, textStatus, errorThrown) {
                alert(textStatus);
                alert(errorThrown);
            },
            complete: function (XMLHttpRequest, textStatus) {
                alert("Done");
            }
        });
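    "invalid label" is the classic symptom of eval() on a bare JSON object: { "id": 1 } parses as a block, so "id" looks like a statement label. The dataFilter above also returns nothing, which is why success sees undefined. A sketch of the simpler form, assuming the response is valid JSON and the URL is same-origin (a cross-domain XHR would fail with a parse error too):

        $.ajax({
            type: "POST",
            url: "/api/get/userlist/" + mid, // assumed same-origin path
            dataType: "json",                // let jQuery parse the JSON itself
            success: function (users) {
                // users is already a parsed array here; no eval needed
                alert(users.length);
                alert(users[1].id);
            },
            error: function (xhr, textStatus, errorThrown) {
                alert(textStatus + ": " + errorThrown);
            }
        });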

  • A smarter way to do this jQuery?

    - by Nicky Christensen
    I have a map on my site, and when regions are clicked a class should be toggled, on both hover and click. I've made a jQuery solution to do this; however, I think it can be done a bit smarter than I've done it. My HTML output is this:

        <div class="mapdk">
            <a data-class="nordjylland" class="nordjylland" href="#"><span>Nordjylland</span></a>
            <a data-class="midtjylland" class="midtjylland" href="#"><span>Midtjylland</span></a>
            <a data-class="syddanmark" class="syddanmark" href="#"><span>Syddanmark</span></a>
            <a data-class="sjaelland" class="sjalland" href="#"><span>Sjælland</span></a>
            <a data-class="hovedstaden" class="hovedstaden" href="#"><span>Hovedstaden</span></a>
        </div>

    And my jQuery looks like:

        if ($jq(".area .mapdk").length) {
            $jq(".mapdk a.nordjylland").hover(function () {
                $jq(".mapdk").toggleClass("nordjylland");
            }).click(function () {
                $jq(".mapdk").toggleClass("nordjylland");
            });
            $jq(".mapdk a.midtjylland").hover(function () {
                $jq(".mapdk").toggleClass("midtjylland");
            }).click(function () {
                $jq(".mapdk").toggleClass("midtjylland");
            });
        }

    The thing is that, with what I've done, I have to write a hover and click function for every link I've got. I was thinking I could keep it in one hover/click function and then use something like $jq(this), but I'm not sure how.
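    A sketch of the generic version, assuming jQuery 1.7+ for .on(): the data-class attribute already carries the class to toggle, and .hover(fn) with a single handler fires on both mouseenter and mouseleave, so one shared handler covers every region.

        if ($jq(".area .mapdk").length) {
            $jq(".mapdk a").on("mouseenter mouseleave click", function () {
                // read the class to toggle from the clicked/hovered link itself
                $jq(".mapdk").toggleClass($jq(this).data("class"));
            });
        }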

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data

        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )

        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular, and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value
        from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

    1. More than one row, even though I asked for TOP 1?
    2. No rows, even though one exists and has been committed?
    3. Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the
        entirely wrong row. For example, if your select reads an index to find
        your row, then the update changes the location of the rows (e.g.: due
        to a page split or an update to the clustered index), when your select
        goes to read the actual data row, it's either no longer there, or a
        different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
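    For reference, a sketch of what enabling the snapshot-based alternatives looks like (the database name is a placeholder): READ_COMMITTED_SNAPSHOT switches plain READ COMMITTED reads to statement-level row versioning, while SNAPSHOT isolation is opted into per transaction.

        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        -- then, in the reading session:
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;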

  • Group variables in a boxplot in R

    - by tao.hong
    I am trying to generate a boxplot whose data come from two scenarios. In the plot, I would like to group boxes by their names (So there will be two boxes per variable). I know ggplot would be a good choice. But I got errors which I could not figure out. Can anyone give me some suggestions? sensitivity_out1 structure(c(0.0522902104339716, 0.0521369824334004, 0.0520240345973737, 0.0519818337359876, 0.051935071418996, 0.0519089404325544, 0.000392698277338341, 0.000326135474295325, 0.000280863338343747, 0.000259631566041935, 0.000246594043996332, 0.000237923540393391, 0.00046732650331544, 0.000474448907808135, 0.000478287273678457, 0.000480194683464109, 0.000480631753078668, 0.000481760272726273, 0.000947965771207979, 0.000944821699830455, 0.000939631071343889, 0.000937186900570605, 0.000936007346568281, 0.000934756220144141, 0.00132442589501872, 0.00132658367774979, 0.00133334696220742, 0.00133622384928092, 0.0013381577476241, 0.00134005741746304, 0.0991622968751298, 0.100791399440082, 0.101946808417405, 0.102524244727408, 0.102920085260477, 0.103232984259916, 0.0305219507186844, 0.0304635269233494, 0.0304161055015213, 0.0303742106794513, 0.0303381888169022, 0.0302996157711171, 1.94268588634518e-05, 2.23991225564447e-05, 2.5756135487907e-05, 2.79997917298194e-05, 3.00753967077715e-05, 3.16270817369878e-05, 0.544701146678523, 0.542887331601984, 0.541632986366816, 0.541005610554556, 0.540617004208336, 0.540315690692195, 0.000453386694666078, 0.000448473414508756, 0.00044692043197248, 0.000444826296854332, 0.000445747996014684, 0.000444764303682453, 0.000127569551159321, 0.000128422491392669, 0.00012933662856487, 0.000129941842982939, 0.000129578971489026, 0.000131113075233758, 0.00684610571790029, 0.00686349387897349, 0.00687468164010565, 0.00687880720347743, 0.00688275579317197, 0.00687822247621936), .Dim = c(6L, 12L)) out2 structure(c(0.0189965816735366, 0.0189995096225103, 0.0190099362589894, 0.0190033523148514, 0.01900896721937, 0.0190099427513381, 0.00192043989797585, 0.00207303208721059, 0.00225931163225165, 0.0024049969048389, 0.00252310364086785, 0.00262940166568126, 0.00195164921633517, 0.00190079923515755, 0.00186139563778548, 0.00184188171395076, 0.00183248544676564, 0.00182492970673969, 1.83038731485927e-05, 1.98252671720347e-05, 2.14794764479231e-05, 2.30713122969332e-05, 2.4484220713564e-05, 2.55958833705284e-05, 0.0428066864455102, 0.0431686808647809, 0.0434411033615353, 0.0435883377765726, 0.0436690169266633, 0.0437340464360965, 0.145288252474567, 0.141488776430307, 0.138204532539654, 0.136281799717717, 0.134864952272761, 0.133738386148036, 0.0711728636959696, 0.072031388688795, 0.0727536853228245, 0.0731581966147734, 0.0734424337399303, 0.0736637270702609, 0.000605277151497094, 0.000617268349064968, 0.000632975679951382, 0.000643904422677427, 0.000653775268094148, 0.000662225067910141, 0.26735354610469, 0.267515415990146, 0.26753155165617, 0.267553498616325, 0.267532284594615, 0.267510330320289, 0.000334158771646756, 0.000319032383145857, 0.000306074699839994, 0.000299153278494114, 0.000293956197852583, 0.000290171804454218, 0.000645975219899115, 0.000637548672578787, 0.000632375486965757, 0.000629579821884212, 0.000624956458229123, 0.000622456283217054, 0.0645188290106884, 0.0651539609630352, 0.0656417364889907, 0.0658996698322889, 0.0660715073023965, 0.0662034341510152), .Dim = c(6L, 12L)) Melt data: group variable value 1 1 PLDKRT 0 2 1 PLDKRT 0 3 1 PLDKRT 0 4 1 PLDKRT 0 5 1 PLDKRT 0 6 1 PLDKRT 0 Code: #Data_source 1 sensitivity_1=rbind(sensitivity_out1,sensitivity_out2) 
        sensitivity_1=data.frame(sensitivity_1)
        colnames(sensitivity_1)=main_l  # variable names
        sensitivity_1$group=1
        # Data_source 2
        sensitivity_2=rbind(sensitivity_out1[3:4,],sensitivity_out2[3:4,])
        sensitivity_2=data.frame(sensitivity_2)
        colnames(sensitivity_2)=main_l
        sensitivity_2$group=2
        sensitivity_pool=rbind(sensitivity_1,sensitivity_2)
        sensitivity_pool_m=melt(sensitivity_pool,id.vars="group")
        ggplot(data = sensitivity_pool_m, aes(x = variable, y = value)) +
            geom_boxplot(aes(fill = group), width = 0.8)

    Error:

        Error in unit(tic_pos.c, "mm") : 'x' and 'units' must have length > 0

    Update: figured out the error. I should use geom_boxplot(aes(fill = factor(group)), width = 0.8) rather than fill = group.
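    Putting the poster's own fix together, a sketch of the corrected call (assuming reshape2 and ggplot2 are loaded): group is numeric after the assignments above, and a boxplot fill must be discrete, hence factor().

        library(reshape2)
        library(ggplot2)

        sensitivity_pool_m <- melt(sensitivity_pool, id.vars = "group")

        ggplot(sensitivity_pool_m,
               aes(x = variable, y = value, fill = factor(group))) +
            geom_boxplot(width = 0.8)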

  • ArrayAdapter throwing ArrayIndexOutOfBoundsException

    - by alex
    I am getting an ArrayIndexOutOfBoundsException when I am using my custom array adapter. I am wondering if there are any coding errors I have overlooked. Here is the error log:

        06-10 20:21:53.254: E/AndroidRuntime(315): FATAL EXCEPTION: main
        06-10 20:21:53.254: E/AndroidRuntime(315): java.lang.RuntimeException: Unable to start activity ComponentInfo{alex.android.galaxy.tab.latest/alex.android.galaxy.tab.latest.Basic_db_output}: java.lang.ArrayIndexOutOfBoundsException
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.access$2300(ActivityThread.java:125)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.os.Handler.dispatchMessage(Handler.java:99)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.os.Looper.loop(Looper.java:123)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.main(ActivityThread.java:4627)
        06-10 20:21:53.254: E/AndroidRuntime(315): at java.lang.reflect.Method.invokeNative(Native Method)
        06-10 20:21:53.254: E/AndroidRuntime(315): at java.lang.reflect.Method.invoke(Method.java:521)
        06-10 20:21:53.254: E/AndroidRuntime(315): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868)
        06-10 20:21:53.254: E/AndroidRuntime(315): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626)
        06-10 20:21:53.254: E/AndroidRuntime(315): at dalvik.system.NativeStart.main(Native Method)
        06-10 20:21:53.254: E/AndroidRuntime(315): Caused by: java.lang.ArrayIndexOutOfBoundsException
        06-10 20:21:53.254: E/AndroidRuntime(315): at alex.android.galaxy.tab.latest.Basic_db_output.onCreate(Basic_db_output.java:44)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        06-10 20:21:53.254: E/AndroidRuntime(315): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627)

    I am basing my code on this example: How to use ArrayAdapter<myClass>

        ArrayList list = new ArrayList();
        for (int i = 1; i <= 3; i++) {
            Reader reader = new ResultsReader("android_galaxy_tab_latest/src/quiz" + i + ".txt");
            reader.read();
            String str = ((ResultsReader) reader).getInput();
            String data[] = str.split("<.>");
            Question q = new Question();
            q.question = data[0];
            q.answer = Integer.parseInt(data[1]);
            q.choice1 = data[2];
            q.choice2 = data[3];
            q.choice3 = data[4];
            list.add(q);
        }
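    The trace blames Basic_db_output.onCreate line 44, which in the snippet above is one of the data[n] accesses: split("<.>") returned fewer than five pieces. Note also that "android_galaxy_tab_latest/src/quiz1.txt" is a desktop project path; on the device such files usually live in assets/ or res/raw/, so the read may come back empty. A defensive sketch (ResultsReader and Question are the poster's own classes; android.util.Log assumed imported):

        String str = ((ResultsReader) reader).getInput();
        String[] data = (str == null) ? new String[0] : str.split("<.>");
        if (data.length < 5) {
            // log and skip the malformed entry instead of crashing on data[4]
            Log.w("QuizParser", "malformed quiz entry (" + data.length + " fields): " + str);
            continue;
        }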

  • Python 2D list has weird behavior when trying to modify a single value...

    - by Brian
    Hi guys, so I am relatively new to Python and I am having trouble working with 2D lists. Here's my code:

        data = [[None]*5]*5
        data[0][0] = 'Cell A1'
        print data

    and here is the output (formatted for readability):

        [['Cell A1', None, None, None, None],
         ['Cell A1', None, None, None, None],
         ['Cell A1', None, None, None, None],
         ['Cell A1', None, None, None, None],
         ['Cell A1', None, None, None, None]]

    Why does every row get assigned the value?
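    What happens: the outer *5 copies five references to one and the same inner list, so writing through any row shows up in all of them. A sketch of the check and the usual fix:

        data = [[None] * 5] * 5
        print all(row is data[0] for row in data)   # True: five names, one list

        # build five independent rows instead
        data = [[None] * 5 for _ in range(5)]
        data[0][0] = 'Cell A1'
        print data[1][0]                            # None, as expected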

  • Naive Bayes matlab, row classification

    - by Jungle Boogie
    How do you classify a row of separate cells in MATLAB? At the moment I can classify single columns like so:

        training = [1;0;-1;-2;4;0;1]; % this is the sample data
        target_class = ['posi';'zero';'negi';'negi';'posi';'zero';'posi'];
        % target_class holds the target classes for the training data;
        % here 'positive' and 'negative' are the two classes for the given training data

        % Training and testing the classifier (between positive and negative)
        test = 10*randn(25, 1); % this is for testing; I am generating random numbers
        class = classify(test, training, target_class, 'diaglinear')
        % this command classifies the test data depending on the given training data
        % using a Naive Bayes classifier

    Unlike the above, I'm looking to classify rows:

                A   B   C
        Row A | 1 | 1 | 1 = a house
        Row B | 1 | 2 | 1 = a garden

    Can anyone help? Here is a code example from MATLAB's site:

        nb = NaiveBayes.fit(training, class)
        nb = NaiveBayes.fit(..., 'param1', val1, 'param2', val2, ...)

    I don't understand what param1 is or what val1 etc. should be.
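    'param1'/val1 are just placeholders for optional name-value pairs. Rows of the training matrix are observations and columns are features, so row classification is the default. A sketch, assuming an older Statistics Toolbox where NaiveBayes.fit exists (the example data and labels are made up):

        % one row per example, one column per feature
        training     = [1 1 1; 1 2 1; 1 1 2; 2 2 1];           % assumed rows
        target_class = {'house'; 'garden'; 'house'; 'garden'}; % assumed labels

        % name-value pairs select options, e.g. the per-feature distribution;
        % 'mvmn' treats each feature as categorical, which suits small integer codes
        nb   = NaiveBayes.fit(training, target_class, 'Distribution', 'mvmn');
        pred = nb.predict([1 2 1])   % classify a new row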

  • Is a 2006 Mac Mini's processor upgradeable to a Core 2 Duo?

    - by chiurox
    I have a Mac Mini with a Core Solo 1.5GHz, 512 MB RAM, 60 GB HDD, etc. I know it's very old, but since it's lying around here I wanted to bump it up for general usage and some experimental iPhone development. Also, Snow Leopard can't be installed as it doesn't have enough RAM. I browsed around, but I'm not sure if this Mac Mini's motherboard accepts a Core 2 Duo (at least a 2.0GHz). If anyone could tell me which generation of Core 2 Duo it still accepts, I'd be grateful.

  • asynchronous sockets

    - by user158182
    I need to receive an acknowledgement from the server when it receives data, confirming that the data has reached the server. Also, how do I use GetSocketOption() with the asynchronous socket methods?
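    TCP does not surface per-send acknowledgements to the application, so the usual pattern is an application-level ACK byte sent back by the server. A minimal C# sketch (the one-byte ACK protocol is an assumption; GetSocketOption is an ordinary synchronous call whether or not the socket is used asynchronously):

        // client side: send, then asynchronously wait for a 1-byte application ACK
        byte[] ack = new byte[1];
        socket.Send(payload);
        socket.BeginReceive(ack, 0, 1, SocketFlags.None, ar =>
        {
            int n = socket.EndReceive(ar);
            if (n == 1 && ack[0] == 0x06) // 0x06 = ASCII ACK, assumed convention
                Console.WriteLine("server confirmed receipt");
        }, null);

        // reading a socket option looks the same in async code:
        int rcvBuf = (int)socket.GetSocketOption(SocketOptionLevel.Socket,
                                                 SocketOptionName.ReceiveBuffer);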

  • Cluster analysis on two columns that contain name of person in R

    - by Alka Shah
    I am a beginner in R. I have to do cluster analysis on data that contains two columns with names of persons. I converted it to a data frame, but it is of character type, and to use the dist() function the data frame must be numeric. An example of my data:

              Interviewed.Type   interviewed.Relation.Type
        1.    An1                Xuan
        2.    An2                The
        3.    An3                Ngoc
        4.    Bui                Thi
        5.    ANT                feed
        7.    Bach               Thi
        8.    Gian1              Thi
        9.    Lan5               Thi
        ...
        1100. Xung               Van

    I will be grateful for your help.
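    A sketch of one standard route for categorical columns (the data frame name df is a placeholder): turn the name columns into factors and use a dissimilarity measure that accepts them, such as Gower distance from the cluster package, instead of dist().

        library(cluster)

        df$Interviewed.Type          <- factor(df$Interviewed.Type)
        df$interviewed.Relation.Type <- factor(df$interviewed.Relation.Type)

        d  <- daisy(df, metric = "gower")  # handles factor columns
        hc <- hclust(d)                    # hierarchical clustering on the result
        plot(hc)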

  • Sessionstate not being saved between pages

    - by Grant
    Hi, I am having problems with an ASP.NET C# site whereby I am setting a session state object to true and then redirecting to another page that needs to check the value of the session state object, and it is null. Sometimes it is set correctly and other times it is simply null. When I debug on my local machine it works perfectly every time; only when I upload to my web server does this temperamental behaviour happen. As it is based around the security of the site, it is obviously important that the session data be valid and accurate every time. Is session state data unreliable? AFAIK it's set to InProc, cookieless, 30 min timeout, vanilla installation of IIS. Does anyone have any suggestions? Perhaps I need to Thread.Sleep in between the storing of the session data and the reading? NB: the time between the write and the read is about 70ms, ample time for the data to be written to RAM.
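    InProc state is lost whenever the worker process recycles and is never shared between workers, so a web garden or multiple app-pool processes on the server would produce exactly this intermittent null (no Thread.Sleep helps with that). A sketch of the usual mitigation, moving state out of process in web.config (machine name and port are assumptions):

        <!-- out-of-process session state survives recycles and is shared
             across worker processes -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=localhost:42424"
                      cookieless="true"
                      timeout="30" />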

  • lua userdata gc

    - by anon
    Is it possible for a piece of Lua userdata to hold a reference to a Lua object (like a table, or another piece of userdata)? Basically, what I want to know is: can I create a piece of userdata in such a way that when the GC runs, the userdata can say: "Hey! I'm holding references to these other objects, mark them as well." Thanks!
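    Yes; a sketch of the standard mechanism in the Lua 5.1 C API (MyData and some_table_idx are placeholders): give the userdata an environment table, and the GC marks that table, and everything it holds, for as long as the userdata is alive.

        lua_newuserdata(L, sizeof(MyData)); /* userdata now on top of the stack */

        lua_newtable(L);                    /* table of objects to keep alive   */
        lua_pushvalue(L, some_table_idx);   /* e.g. an existing table (assumed) */
        lua_rawseti(L, -2, 1);              /* env[1] = that table              */
        lua_setfenv(L, -2);                 /* attach env to the userdata (5.1) */
                                            /* Lua 5.2+: lua_setuservalue       */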

  • How to return 'null' instead of 'undefined' from select element

    - by MrW
    How can I return null instead of 'undefined' from a select element? Basically I have four selects at the moment, but only the first one is populated with data at this point, and I want to grab the value from all of them when working with them.

        <select class="changeValue" id="drpOne">
            <option id="1">1</option>
            <option id="2">2</option>
            <option id="3">3</option>
        </select>
        <select class="changeValue" id="drpTwo"></select>

    JQuery:

        $('.changeValue').change(function() {
            var data = {};
            data["Id1"] = $('#drpOne:selected').attr("id");
            data["Id2"] = $('#drpTwo:selected').attr("id");
        });

    In this case, drpTwo will return 'undefined'. Is there any way to get a null instead?
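    Two details, then a sketch: :selected matches option elements, so the selector needs to be '#drpOne option:selected', and jQuery's .attr() returns undefined for a missing attribute; ORing with null converts that.

        $('.changeValue').change(function () {
            var data = {};
            // select the chosen <option>, not the <select>; fall back to null
            data["Id1"] = $('#drpOne option:selected').attr("id") || null;
            data["Id2"] = $('#drpTwo option:selected').attr("id") || null;
        });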

  • Parsing the json and storing it in an array?

    - by Prateek Raj
    Hi everyone, I'm very new to these web-related problems, please help. I'm working on Sencha, which proves to be very difficult when it comes to JSON parsing. So I'm planning on retrieving the data in the HTML page and then loading it into my js file. Here is the problem: I've already asked about it and got a reply: http://jsbin.com/uwuca5. But now, when I use the HTML source code locally in my system, or even through IIS, I can't parse the data. Here is the link for my json file: http://compliantbox.com/optionsedge/sample.php. I'm trying to use this link in my code, but the data returned is null. Please help. Thank you,
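    One thing to rule out first: loading the page from localhost/IIS makes a request to compliantbox.com cross-domain, and a browser XHR will then come back empty, which matches the null result. Served same-origin, a plain Sencha request would look like this sketch (the path is an assumption; truly cross-domain JSON needs Ext.data.JsonP or a server-side proxy instead):

        Ext.Ajax.request({
            url: '/optionsedge/sample.php',  // assumed same-origin path
            success: function (response) {
                var data = Ext.decode(response.responseText); // JSON text -> object
                console.log(data);
            },
            failure: function (response) {
                console.log('request failed: ' + response.status);
            }
        });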

  • Using the standard Java logging, is it possible to restart logs after a certain period?

    - by Fry
    I have some Java code that will be running as an importer of data for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems a little inadequate now given the amount of data passing through the importer. Often the importer will get data that the main system doesn't have information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that this tends to grow in size very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody know if this can be done with the logging classes, or would I have to switch to log4j or something custom? Thanks for any help!
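    java.util.logging's FileHandler only rotates by size (limit/count), not by date; date-based rollover needs a custom handler or a restart. A sketch of both options (file names are placeholders):

        import java.io.IOException;
        import java.time.LocalDate;
        import java.util.logging.FileHandler;
        import java.util.logging.Logger;
        import java.util.logging.SimpleFormatter;

        public class ImporterLog {
            public static Logger make() throws IOException {
                Logger log = Logger.getLogger("importer");

                // built-in: rotate at 10 MB, keep 5 generations (import.0.log .. import.4.log)
                FileHandler bySize = new FileHandler("import.%g.log", 10_000_000, 5, true);
                bySize.setFormatter(new SimpleFormatter());
                log.addHandler(bySize);

                // cheap date-based workaround: bake the day into the name; the handler
                // must be recreated (or the importer restarted) for the next day's file
                FileHandler byDay = new FileHandler("import-" + LocalDate.now() + ".log", true);
                byDay.setFormatter(new SimpleFormatter());
                log.addHandler(byDay);

                return log;
            }
        }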

  • HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster suddenly disconnect while I am running MapReduce code on 10GB of data. After all mappers finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network, even after I killed the MapReduce job when I found some datanode was disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls of all computing nodes, disabled SELinux and increased the open-file limit to 20000, but they didn't work at all. Does anyone have an idea how to solve this problem? The following is the error log from MapReduce:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    and the following are logs from a datanode:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/data/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.secondary.http.address</name>
            <value>0.0.0.0:20090</value>
            <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:20010</value>
            <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
          </property>  <!-- this closing tag was missing in the pasted config -->
          <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:20020</value>
            <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:20475</value>
          </property>
          <property>
            <name>dfs.https.address</name>
            <value>0.0.0.0:20470</value>
          </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
          </property>
          <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
          </property>
          <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
          </property>
          <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description>default number of map tasks per job.</description>
          </property>
          <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
          </property>
          <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description>default number of reduce tasks per job.</description>
          </property>
          <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
          </property>
          <property>
            <name>io.sort.mb</name>
            <value>500</value>
          </property>
          <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value>  <!-- 30 minutes -->
          </property>
          <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>mapred.task.tracker.http.address</name>
            <value>0.0.0.0:20060</value>
            <description>50060</description>
          </property>
        </configuration>
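    Not a confirmed fix, but one observation: the "480000 millis timeout" in the datanode log is exactly the default of dfs.datanode.socket.write.timeout (8 minutes), so the nodes are genuinely stalling on block transfers rather than hitting a tight limit. If the links are merely slow or lossy, raising the DFS socket timeouts in hdfs-site.xml (property names from the 0.20/0.21 line; values are assumptions) can keep datanodes alive while the underlying network problem is tracked down:

        <property>
          <name>dfs.socket.timeout</name>
          <value>1200000</value>  <!-- read timeout in ms, assumed value -->
        </property>
        <property>
          <name>dfs.datanode.socket.write.timeout</name>
          <value>1200000</value>  <!-- write timeout in ms, assumed value -->
        </property>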

  • Reading from a database located in the Program Files folder using ODBC

    - by Dabblernl
    We have an application that stores its database files in a subfolder of the Program Files directory. These files are redirected to the VirtualStore in Vista and Windows 7. We present data from the database using Microsoft DataReports (VB6). So far so good. But we now want to use Crystal Reports XI to present data from the database. Our idea is to NOT pass this data to CR from our program, but to have CR retrieve it from the database using a system DSN through ODBC. In this way we hope to give our users more flexibility in designing their own reports. What we do want to ensure, though, is that these system DSNs are configured correctly when the user installs our program or when the program calls the Crystal Report. Is there a smart way to do this, using system variables for instance, instead of having to write a routine that checks for OS version, whether UAC is enabled on the OS, whether the write restrictions on the Program Files folder have been lifted, etc., and then adapts the system DSN to point to either the C:\Program Files\OurApp\Data folder or the C:\Users\User\AppData\VirtualStore\Program Files\OurApp\Data folder? Suggestions for an entirely different approach are welcome too!

  • Adding custom columns to Propel model?

    - by Hard-Boiled Wonderland
    At the moment I am using the query below:

        $claims = ClaimQuery::create('c')
            ->leftJoinUser()
            ->withColumn('CONCAT(User.Firstname, " ", User.Lastname)', 'name')
            ->withColumn('User.Email', 'email')
            ->filterByArray($conditions)
            ->paginate($page = $page, $maxPerPage = $top);

    I then want to add columns manually, so I thought this would simply work:

        foreach ($claims as &$claim) {
            $claim->actions = array('edit' => array(
                'url'  => $this->get('router')->generate('hera_claims_edit'),
                'text' => 'Edit',
            ));
        }
        return array('claims' => $claims, 'count' => count($claims));

    However, when the data is returned, Propel or Symfony2 seems to strip the custom data when it gets converted to JSON, along with all of the superfluous model data. What is the correct way of manually adding data this way?
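    A sketch of one approach, assuming Propel 1.5+: the withColumn() values above arrive as "virtual columns", and ad-hoc data can be attached the same way, so it lives on the model rather than as a stray PHP property. Whether toArray()/JSON export includes virtual columns depends on the Propel version; they can always be read back with getVirtualColumn().

        foreach ($claims as $claim) {
            // stored alongside the withColumn() values ('name', 'email')
            $claim->setVirtualColumn('actions', array(
                'edit' => array(
                    'url'  => $this->get('router')->generate('hera_claims_edit'),
                    'text' => 'Edit',
                ),
            ));
        }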

  • VBScript - image to binary

    - by countnazgul
    Hi to all, I'm not a VB programmer, but I need a VBScript that converts an image file (from the local disk) to binary data and then passes it to a web service. I've worked out how to pass data to the web service, but I can't find how to convert the image file to binary data. I've spent a lot of time looking for some kind of solution, but with no luck. Can somebody help me? Thanks!
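    A sketch using ADODB.Stream, the usual way to read raw bytes from VBScript (the file path is a placeholder):

        Const adTypeBinary = 1

        Dim stream, bytes
        Set stream = CreateObject("ADODB.Stream")
        stream.Type = adTypeBinary
        stream.Open
        stream.LoadFromFile "C:\path\to\image.jpg"  ' placeholder path
        bytes = stream.Read                         ' byte array for the web service call
        stream.Close
        Set stream = Nothing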
