Search Results

Search found 2520 results on 101 pages for 'matt bear'.


  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
    My question is not about a specific code snippet but is more general, so please bear with me: how should I organize the data I'm analyzing, and which tools should I use to manage it?

    I'm using python and numpy to analyse data. Because the python documentation indicates that dictionaries are very optimized in python, and also because the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary; the position in the hierarchy defines the nature of the element, and each new level defines the contents of a key in the level above:

        [individual]                               ex: [AS091209M02], [AS091209M01], [AS090901M06], ...
          [imaging session (date string)]          ex: [100113], [100211], [100128], [100121]
            [region imaged]                        ex: [R16], [R17], [R03], [R15], [R05], [R04], [R07], ...
              [timestamp of file]                  ex: [1263399103], ...
                [properties of file]               ex: [ImageSize], [FilePath], [Trials], [Depth], [Frames], [Responses], ...
                  [regions of interest in image]   ex: [N01], [N04], ...
                    [format of data]               ex: [Sequential], [Randomized]
                      [channel of acquisition: this key indexes an array of values]   ex: [Ch1], [Ch2]

    The type of operations I perform is, for instance, to compute properties of the arrays (listed under Ch1, Ch2), pick out arrays to make a new collection, analyze the responses of N01 from region 16 (R16) of a given individual at different time points, and so on. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig).

    My problem comes from the cumbersome manner in which I need to program the operations on the dictionary. I often have stretches of code that go like this:

        for mk in dic.keys():
            for rgk in dic[mk].keys():
                for nk in dic[mk][rgk].keys():
                    for ik in dic[mk][rgk][nk].keys():
                        for ek in dic[mk][rgk][nk][ik].keys():
                            # do something

    which is ugly, cumbersome, non-reusable, and brittle (it needs to be recoded for any variant of the dictionary).

    I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in ipython when I'm dealing with deeply nested recursive functions). In the end, the only recursive function I use regularly is the following:

        def dicExplorer(dic, depth=-1, stp=0):
            '''prints the hierarchy of a dictionary.
            if depth is not specified, will explore all the dictionary'''
            if depth - stp == 0:
                return
            try:
                list_keys = dic.keys()
            except AttributeError:
                return
            stp += 1
            for key in list_keys:
                print '+%s> [\'%s\']' % (stp * '---', key)
                dicExplorer(dic[key], depth, stp)

    I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need either to use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (sqlite?). My problem is that since I'm (badly) self-taught in regards to programming, I lack the practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and effort into something that will be a dead end for my needs. So what are your suggestions for tackling this issue, so that I can code my tools in a more brief, flexible and reusable manner?
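    One pattern that can replace the nested loops is a small recursive generator that walks the dictionary and yields each leaf together with the path of keys leading to it. This is only a sketch of the idea (the names iter_leaves and path are illustrative, not from the question), but it is self-contained:

        def iter_leaves(node, path=()):
            """Yield (key_path, leaf_value) pairs for every leaf of a nested dict."""
            if isinstance(node, dict):
                for key in node:
                    # descend, extending the key path by one level
                    for pair in iter_leaves(node[key], path + (key,)):
                        yield pair
            else:
                # not a dict: this is a leaf, e.g. the array under Ch1/Ch2
                yield path, node

    With that in place, the five nested for loops collapse into a single filtered loop, e.g. to visit every Ch1 array under region R16:

        for path, arr in iter_leaves(dic):
            if 'R16' in path and path[-1] == 'Ch1':
                pass  # do something with arr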


  • ASP.NET MVC 2 "value" in IsValid override in DataAnnotation attribute passed is null, when incorrect

    - by goldenelf2
    Hello to all! This is my first question here on Stack Overflow. I need help with a problem I encountered during an ASP.NET MVC 2 project I am currently working on. I should note that I'm relatively new to MVC design, so please bear with my ignorance. Here goes:

    I have a regular form on which various details about a person are shown. One of them is "Date of Birth". My view is like this:

        <div class="form-items">
            <%: Html.Label("DateOfBirth", "Date of Birth:") %>
            <%: Html.EditorFor(m => m.DateOfBirth) %>
            <%: Html.ValidationMessageFor(m => m.DateOfBirth) %>
        </div>

    I'm using an editor template I found, to show only the date correctly:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime?>" %>
        <%= Html.TextBox("", (Model.HasValue ? Model.Value.ToShortDateString() : string.Empty)) %>

    I used the LinqToSql designer to create my model from an SQL database. In order to do some validation, I made a partial class Person to extend the one created by the designer (under the same namespace):

        [MetadataType(typeof(IPerson))]
        public partial class Person : IPerson
        {
            // To create buddy class
        }

        public interface IPerson
        {
            [Required(ErrorMessage = "Please enter a name")]
            string Name { get; set; }

            [Required(ErrorMessage = "Please enter a surname")]
            string Surname { get; set; }

            [Birthday]
            DateTime? DateOfBirth { get; set; }

            [Email(ErrorMessage = "Please enter a valid email")]
            string Email { get; set; }
        }

    I want to make sure that a correct date is entered, so I created a custom DataAnnotation attribute to validate the date:

        public class BirthdayAttribute : ValidationAttribute
        {
            private const string _errorMessage = "Please enter a valid date";

            public BirthdayAttribute() : base(_errorMessage) { }

            public override bool IsValid(object value)
            {
                if (value == null)
                {
                    return true;
                }
                DateTime temp;
                bool result = DateTime.TryParse(value.ToString(), out temp);
                return result;
            }
        }

    Well, my problem is this: once I enter an incorrect date in the DateOfBirth field, no custom message is displayed, even if I use the attribute like [Birthday(ErrorMessage="...")]. The message displayed is the framework's own, i.e. "The value '32/4/1967' is not valid for DateOfBirth.". I tried putting some breakpoints around the code and found out that the "value" in the attribute is always null when the date is incorrect, but always gets a value if the date is in the correct format. The same (value == null) is passed also into the code generated by the designer. This thing is driving me nuts. Please, can anyone help me deal with this?

    Also, can someone tell me where exactly the point of entry from the view to the database is? Is it related to the model binder? I wanted to check exactly what value is passed once I press the "submit" button. Thank you.
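    For what it's worth, the behavior described is consistent with the default model binder rather than the validator: when "32/4/1967" cannot be parsed into the DateTime? property, the binder records its own conversion error and leaves the bound value null, so the [Birthday] attribute never sees the raw text. One way around this is to bind nullable DateTime values yourself. The sketch below is an illustration of that idea for MVC 2 (the class name is made up), not the only possible fix:

        public class NullableDateTimeBinder : IModelBinder
        {
            public object BindModel(ControllerContext controllerContext,
                                    ModelBindingContext bindingContext)
            {
                var result = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
                if (result == null || string.IsNullOrEmpty(result.AttemptedValue))
                    return null;

                DateTime parsed;
                if (DateTime.TryParse(result.AttemptedValue, out parsed))
                    return (DateTime?)parsed;

                // report a friendlier message instead of the default binder text
                bindingContext.ModelState.AddModelError(
                    bindingContext.ModelName, "Please enter a valid date");
                return null;
            }
        }

    registered once in Global.asax:

        protected void Application_Start()
        {
            ModelBinders.Binders[typeof(DateTime?)] = new NullableDateTimeBinder();
        }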


  • Internet Explorer percent based layout issue

    - by Tom
    Heya. My goal is to make a layout that is 200% width and height, with four containers of equal height and width (100% each), using no JavaScript as a bare minimum (and preferably no hacks). Right now I am using HTML5 and CSS display:table. It works fine in Safari 4, Firefox 3.5, and Chrome 5; I haven't tested it yet on older versions. Nonetheless, in IE7 and IE8 this layout fails completely. (I do use the JavaScript HTML5-enabling shiv, so it should not be the use of new HTML5 tags.) Here is what I have:

        <!DOCTYPE html>
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <meta charset="UTF-8" />
        <title>IE issue with layout</title>
        <style type="text/css" media="all">
            /* styles */
            @import url("reset.css");

            /* General CSS */
            .table { display:table; }
            .row   { display:table-row; }
            .cell  { display:table-cell; }

            /* Specific CSS */
            html, body {
                /* overflow:hidden;  I later intend to limit the viewport */
            }
            section#body { position:absolute; width:200%; height:200%; overflow:hidden; }
            section#body .row { width:200%; height:50%; overflow:hidden; }
            section#body .row .cell { width:50%; overflow:hidden; }
            section#body .row .cell section { display:block; width:100%; height:100%; overflow:hidden; }
            section#body #stage0 section header { text-align:center; height:20%; display:block; }
            section#body #stage0 section footer { display:block; height:80%; }
        </style>
        </head>
        <body>
            <section id="body" class="table">
                <section class="row">
                    <section id="stage0" class="cell">
                        <section>
                            <header>
                                <form>
                                    <input type="text" name="q" />
                                    <input type="submit" value="Search" />
                                </form>
                            </header>
                            <footer>
                                <table id="scrollers">
                                </table>
                            </footer>
                        </section>
                    </section>
                    <section id="stage1" class="cell">
                        <section>content</section>
                    </section>
                </section>
                <section class="row">
                    <section id="stage2" class="cell">
                        <section>content</section>
                    </section>
                    <section id="stage3" class="cell">
                        <section>content</section>
                    </section>
                </section>
            </section>
        </body>
        </html>

    You can see it live here: http://www.tombarrasso.com/ie-issue/
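    One detail worth knowing here: IE7 has no support at all for display:table, display:table-row and display:table-cell (IE8 added them in standards mode), so the .table/.row/.cell classes simply do nothing there. A sketch of a float-based fallback that keeps the same markup and the 2x2 geometry, at the cost of the table-like equal-height behavior:

        /* fallback for browsers without CSS table display support */
        section#body { position:absolute; width:200%; height:200%; overflow:hidden; }
        section#body .row { width:100%; height:50%; overflow:hidden; }
        section#body .row .cell { float:left; width:50%; height:100%; overflow:hidden; }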


  • Export GridView to Excel

    - by nCdy
    I'm using Matt's util code (edited a bit for Unicode text):

        public class GridViewExportUtil
        {
            /// <param name="fileName"></param>
            /// <param name="gv"></param>
            public static void Export(string fileName, GridView gv)
            {
                HttpContext.Current.Response.Clear();
                HttpContext.Current.Response.ContentType = "application/ms-excel";
                HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                HttpContext.Current.Response.Charset = System.Text.Encoding.Unicode.EncodingName;
                HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.Unicode;
                HttpContext.Current.Response.BinaryWrite(System.Text.Encoding.Unicode.GetPreamble());
                HttpContext.Current.Response.AddHeader("content-disposition",
                    string.Format("attachment; filename=Report.xml")); // need a .XLS file

                using (StringWriter sw = new StringWriter())
                {
                    using (HtmlTextWriter htw = new HtmlTextWriter(sw))
                    {
                        // Create a table to contain the grid
                        Table table = new Table();

                        // add the header row to the table
                        if (gv.HeaderRow != null)
                        {
                            GridViewExportUtil.PrepareControlForExport(gv.HeaderRow);
                            table.Rows.Add(gv.HeaderRow);
                        }

                        // add each of the data rows to the table
                        foreach (GridViewRow row in gv.Rows)
                        {
                            GridViewExportUtil.PrepareControlForExport(row);
                            table.Rows.Add(row);
                        }

                        // add the footer row to the table
                        if (gv.FooterRow != null)
                        {
                            GridViewExportUtil.PrepareControlForExport(gv.FooterRow);
                            table.Rows.Add(gv.FooterRow);
                        }

                        // render the table into the htmlwriter
                        table.RenderControl(htw);

                        // render the htmlwriter into the response
                        HttpContext.Current.Response.Write(sw.ToString());
                        HttpContext.Current.Response.End();
                    }
                }
            }

            /// <summary>
            /// Replace any of the contained controls with literals
            /// </summary>
            /// <param name="control"></param>
            private static void PrepareControlForExport(Control control)
            {
                for (int i = 0; i < control.Controls.Count; i++)
                {
                    Control current = control.Controls[i];
                    if (current is LinkButton)
                    {
                        control.Controls.Remove(current);
                        control.Controls.AddAt(i, new LiteralControl((current as LinkButton).Text));
                    }
                    else if (current is ImageButton)
                    {
                        control.Controls.Remove(current);
                        control.Controls.AddAt(i, new LiteralControl((current as ImageButton).AlternateText));
                    }
                    else if (current is HyperLink)
                    {
                        control.Controls.Remove(current);
                        control.Controls.AddAt(i, new LiteralControl((current as HyperLink).Text));
                    }
                    else if (current is DropDownList)
                    {
                        control.Controls.Remove(current);
                        control.Controls.AddAt(i, new LiteralControl((current as DropDownList).SelectedItem.Text));
                    }
                    else if (current is CheckBox)
                    {
                        control.Controls.Remove(current);
                        control.Controls.AddAt(i, new LiteralControl((current as CheckBox).Checked ? "True" : "False"));
                    }

                    if (current.HasControls())
                    {
                        GridViewExportUtil.PrepareControlForExport(current);
                    }
                }
            }
        }

    Question: how do I make the downloaded file editable (not read-only)? Also, the .xls won't open correctly in Unicode format, and when I change the format to UTF-8 I can't see the Russian words. So, second question: how do I get Unicode working for the .xls? Third question: how can I keep the table grid lines? Thank you.
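    On the Unicode question: Excel generally detects UTF-8 only when the response starts with a UTF-8 byte-order mark, so switching ContentEncoding alone usually isn't enough. A sketch of the header/encoding combination that tends to work for Cyrillic text in this kind of HTML-based .xls export (hedged, since Excel's sniffing varies by version):

        HttpContext.Current.Response.ContentType = "application/ms-excel";
        HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.UTF8;
        HttpContext.Current.Response.Charset = "utf-8";
        // write the UTF-8 BOM so Excel recognizes the encoding
        HttpContext.Current.Response.BinaryWrite(System.Text.Encoding.UTF8.GetPreamble());
        HttpContext.Current.Response.AddHeader("content-disposition",
            "attachment; filename=Report.xls"); // .xls, not .xml

    The "read-only" symptom is often just the browser opening the attachment from its temporary internet files folder; saving the file locally before opening it is the usual workaround.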


  • Move <option> to top of list with Javascript

    - by Adam
    I'm trying to create a button that will move the currently selected OPTION in a SELECT MULTIPLE list to the top of that list. I currently have OptionTransfer.js implemented, which allows me to move items up and down the list. I want to add a new function:

        function MoveOptionTop(obj) {
            ...
        }

    Here is the source of OptionTransfer.js:

        // ===================================================================
        // Author: Matt Kruse
        // WWW: http://www.mattkruse.com/
        //
        // NOTICE: You may use this code for any purpose, commercial or
        // private, without any further permission from the author. You may
        // remove this notice from your final code if you wish, however it is
        // appreciated by the author if at least my web site address is kept.
        //
        // You may *NOT* re-distribute this code in any way except through its
        // use. That means, you can include it in your product, or your web
        // site, or any other form where the code is actually being used. You
        // may not put the plain javascript up on your site for download or
        // include it in your javascript libraries for download.
        // If you wish to share this code with others, please just point them
        // to the URL instead.
        // Please DO NOT link directly to my .js files from your site. Copy
        // the files to your server and use them there. Thank you.
        // ===================================================================

        /* SOURCE FILE: selectbox.js */
        function hasOptions(obj){if(obj!=null && obj.options!=null){return true;}return false;}
        function selectUnselectMatchingOptions(obj,regex,which,only){if(window.RegExp){if(which == "select"){var selected1=true;var selected2=false;}else if(which == "unselect"){var selected1=false;var selected2=true;}else{return;}var re = new RegExp(regex);if(!hasOptions(obj)){return;}for(var i=0;i(b.text+"")){return 1;}return 0;});for(var i=0;i3){var regex = arguments[3];if(regex != ""){unSelectMatchingOptions(from,regex);}}if(!hasOptions(from)){return;}for(var i=0;i=0;i--){var o = from.options[i];if(o.selected){from.options[i] = null;}}if((arguments.length=0;i--){if(obj.options[i].selected){if(i !=(obj.options.length-1) && !obj.options[i+1].selected){swapOptions(obj,i,i+1);obj.options[i+1].selected = true;}}}}
        function removeSelectedOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i>=0;i--){var o=from.options[i];if(o.selected){from.options[i] = null;}}from.selectedIndex = -1;}
        function removeAllOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i>=0;i--){from.options[i] = null;}from.selectedIndex = -1;}
        function addOption(obj,text,value,selected){if(obj!=null && obj.options!=null){obj.options[obj.options.length] = new Option(text, value, false, selected);}}

        /* SOURCE FILE: OptionTransfer.js */
        function OT_transferLeft(){moveSelectedOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();}
        function OT_transferRight(){moveSelectedOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();}
        function OT_transferAllLeft(){moveAllOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();}
        function OT_transferAllRight(){moveAllOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();}
        function OT_saveRemovedLeftOptions(f){this.removedLeftField = f;}
        function OT_saveRemovedRightOptions(f){this.removedRightField = f;}
        function OT_saveAddedLeftOptions(f){this.addedLeftField = f;}
        function OT_saveAddedRightOptions(f){this.addedRightField = f;}
        function OT_saveNewLeftOptions(f){this.newLeftField = f;}
        function OT_saveNewRightOptions(f){this.newRightField = f;}
        function OT_update(){var removedLeft = new Object();var removedRight = new Object();var addedLeft = new Object();var addedRight = new Object();var newLeft = new Object();var newRight = new Object();for(var i=0;i0){str=str+delimiter;}str=str+val;}return str;}
        function OT_setDelimiter(val){this.delimiter=val;}
        function OT_setAutoSort(val){this.autoSort=val;}
        function OT_setStaticOptionRegex(val){this.staticOptionRegex=val;}
        function OT_init(theform){this.form = theform;if(!theform[this.left]){alert("OptionTransfer init(): Left select list does not exist in form!");return false;}if(!theform[this.right]){alert("OptionTransfer init(): Right select list does not exist in form!");return false;}this.left=theform[this.left];this.right=theform[this.right];for(var i=0;i
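    Since the question really only needs the new function, here is a sketch of MoveOptionTop written in the library's style. It assumes the library's swapOptions(obj, i, j) helper, used by the move up/down functions above, exchanges the options at indices i and j:

        // Move every selected option to the top of the list, preserving
        // the relative order of the selected options themselves.
        function MoveOptionTop(obj) {
            if (!hasOptions(obj)) { return; }
            var insertAt = 0;
            for (var i = 0; i < obj.options.length; i++) {
                if (obj.options[i].selected) {
                    // bubble the selected option up to the insertion point
                    for (var j = i; j > insertAt; j--) {
                        swapOptions(obj, j, j - 1);
                    }
                    obj.options[insertAt].selected = true;
                    insertAt++;
                }
            }
        }

    It can be wired to a button the same way as the other move functions, e.g. onclick="MoveOptionTop(this.form.list1)".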


  • Marshal a C# struct to C++ VARIANT

    - by jortan
    To start with, I'm not very familiar with COM technology - this is the first time I'm working with it - so bear with me. I'm trying to call a COM object's function from C#. This is the interface in the IDL file:

        [id(6), helpstring("vConnectInfo=ConnectInfoType")]
        HRESULT ConnectTarget([in,out] VARIANT* vConnectInfo);

    This is the interop interface I got after running tlbimp:

        void ConnectTarget(ref object vConnectInfo);

    The C++ code in the COM object for the target function:

        STDMETHODIMP PCommunication::ConnectTarget(VARIANT* vConnectInfo)
        {
            if (!((vConnectInfo->vt & VT_ARRAY) && (vConnectInfo->vt & VT_BYREF)))
            {
                return E_INVALIDARG;
            }
            ConnectInfoType *pConnectInfo = (ConnectInfoType *)((*vConnectInfo->pparray)->pvData);
            ...
        }

    This COM object is running in another process; it is not in a DLL. I can add that the COM object is also used from another program written in C++. In that case there is no problem, because in C++ a VARIANT is created, pparray->pvData is set to the ConnectInfoType data structure, and then the COM object is called with the VARIANT as a parameter.

    In C#, as I understand it, my struct should be marshalled as a VARIANT automatically. These are two methods I've been using to call this method from C# (or actually I've tried a lot more...):

        private void method1_Click(object sender, EventArgs e)
        {
            pcom.PCom PCom = new pcom.PCom();
            pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom;
            m_ci = new ConnectInfoType();
            fillConnInfo(ref m_ci);
            mgmt.ConnectTarget(m_ci);
        }

    In the above case the struct gets marshalled as VT_UNKNOWN. This is a simple case and it works if the parameter is not a struct (e.g. it works for int).

        private void method4_Click(object sender, EventArgs e)
        {
            ConnectInfoType ci = new ConnectInfoType();
            fillConnInfo(ref ci);

            pcom.PCom PCom = new pcom.PCom();
            pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom;

            ParameterModifier[] pms = new ParameterModifier[1];
            ParameterModifier pm = new ParameterModifier(1);
            pm[0] = true;
            pms[0] = pm;

            object[] param = new object[1];
            param[0] = ci;
            object[] args = new object[1];
            args[0] = param;

            mgmt.GetType().InvokeMember("ConnectTarget", BindingFlags.InvokeMethod, null, mgmt, args, pms, null, null);
        }

    In this case it gets marshalled as VT_ARRAY | VT_BYREF | VT_VARIANT. The problem is that when debugging the target function ConnectTarget, I cannot find the data I sent in the SAFEARRAY struct (or in any other place in memory either). What do I do with a VT_VARIANT? Any ideas on how to get my struct data across?

    Update - the ConnectInfoType struct:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public class ConnectInfoType
        {
            public short network;
            public short nodeNumber;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 51)]
            public string connTargPassWord;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
            public string sConnectId;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 16)]
            public string sConnectPassword;
            public EnuConnectType eConnectType;
            public int hConnectHandle;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
            public string sAccessPassword;
        };

    And the corresponding struct in C++:

        typedef struct ConnectInfoType {
            short network;
            short nodeNumber;
            char connTargPassWord[51];
            char sConnectId[8];
            char sConnectPassword[16];
            EnuConnectType eConnectType;
            int hConnectHandle;
            char sAccessPassword[8];
        } ConnectInfoType;
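    Since the C++ side reads the struct straight out of the SAFEARRAY's pvData, one avenue worth sketching (untested against this particular server, and using only System.Runtime.InteropServices calls) is to flatten the struct into a byte[], which the interop layer passes as a SAFEARRAY of VT_UI1 whose pvData is exactly those bytes:

        // Flatten the managed struct to raw bytes using its declared layout.
        ConnectInfoType ci = new ConnectInfoType();
        fillConnInfo(ref ci);

        int size = Marshal.SizeOf(typeof(ConnectInfoType));
        byte[] buffer = new byte[size];
        IntPtr ptr = Marshal.AllocHGlobal(size);
        try
        {
            Marshal.StructureToPtr(ci, ptr, false);
            Marshal.Copy(ptr, buffer, 0, size);
        }
        finally
        {
            Marshal.FreeHGlobal(ptr);
        }

        // Passing the array by ref adds the BYREF the server tests for.
        object boxed = buffer;
        mgmt.ConnectTarget(ref boxed);

        // If the server writes back into the array, unpack the result:
        byte[] outBytes = (byte[])boxed;
        GCHandle handle = GCHandle.Alloc(outBytes, GCHandleType.Pinned);
        try
        {
            ci = (ConnectInfoType)Marshal.PtrToStructure(
                handle.AddrOfPinnedObject(), typeof(ConnectInfoType));
        }
        finally
        {
            handle.Free();
        }

    Whether the out-of-process marshaler presents this to the server exactly as its VT_ARRAY/VT_BYREF check expects is something to verify in the debugger.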


  • CakeDC Users Plugin - I Can't Send Emails

    - by JimBadger
    I apologise for the rambling nature of this question; please bear with me and I'll provide all the extra info needed for you to stop me going mad from failing at something that looks inherently very straightforward...

    I've just installed CakePHP 2.2, and the first thing I've done is add the CakeDC Users plugin. It's all working, apart from sending an email verification when a user registers. I've tried so many combinations of different things in email.php that I have now utterly got my knickers in a twist. Whatever I do, when the verification email should be sent, all I get is:

        No connection could be made because the target machine actively refused it.

    My email.php currently looks like this:

        class EmailConfig {

            public $default = array(
                'transport' => 'Smtp',
                'from' => '[email protected]',
                //'charset' => 'utf-8',
                //'headerCharset' => 'utf-8',
            );

            public $smtp = array(
                'transport' => 'Smtp',
                'from' => array('Blah <[email protected]>' => 'Chimp'),
                'host' => 'ssl://smtp.gmail.com',
                'port' => 465,
                'timeout' => 30,
                'username' => '[email protected]',
                'password' => 'secret',
                'client' => null,
                'log' => false,
                //'charset' => 'utf-8',
                //'headerCharset' => 'utf-8',
            );

            public $fast = array(
                'from' => '[email protected]',
                'sender' => null,
                'to' => null,
                'cc' => null,
                'bcc' => null,
                'replyTo' => null,
                'readReceipt' => null,
                'returnPath' => null,
                'messageId' => true,
                'subject' => null,
                'message' => null,
                'headers' => null,
                'viewRender' => null,
                'template' => false,
                'layout' => false,
                'viewVars' => null,
                'attachments' => null,
                'emailFormat' => null,
                'transport' => 'Smtp',
                'host' => 'blah.net',
                'port' => 25,
                'timeout' => 30,
                'username' => 'user',
                'password' => 'secret',
                'client' => null,
                'log' => true,
                //'charset' => 'utf-8',
                //'headerCharset' => 'utf-8',
            );
        }

    How do I get the CakeDC Users plugin to just send a plain, non-SMTP email? Or do I have to use, for example, my Gmail details? And if I do have to go down the SMTP route, what is wrong with the above? Other info: I'm using the latest version of XAMPP, and my PHP install is SSL-enabled.
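    For context, that "actively refused" message is the OS-level error for nothing listening at the host/port the Smtp transport connected to; a default XAMPP box usually has no local mail server. To take SMTP out of the picture entirely, CakePHP 2.x ships a Mail transport that uses PHP's mail() function, so a minimal sketch of the 'default' config (which CakeEmail falls back to unless told otherwise) would be:

        class EmailConfig {
            public $default = array(
                'transport' => 'Mail',  // PHP mail(), no SMTP connection needed
                'from' => array('noreply@example.com' => 'My Site'),  // example address
            );
        }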


  • Android SQLite crashes after trying to retrieve data from it

    - by Pavel
    Hey everyone. I'm kinda new to Android programming, so please bear with me. I'm having some problems with retrieving records from the db. Basically, all I want to do is store the latitudes and longitudes that the GPS positioning functions output, and display them later in a list, using a ListActivity on a different tab. This is how the code for my DBAdapter helper class looks:

        public class DBAdapter {
            public static final String KEY_ROWID = "_id";
            public static final String KEY_LATITUDE = "latitude";
            public static final String KEY_LONGITUDE = "longitude";
            private static final String TAG = "DBAdapter";

            private static final String DATABASE_NAME = "coords";
            private static final String DATABASE_TABLE = "coordsStorage";
            private static final int DATABASE_VERSION = 1;

            private static final String DATABASE_CREATE =
                "create table coordsStorage (_id integer primary key autoincrement, "
                + "latitude integer not null, longitude integer not null);";

            private final Context context;
            private DatabaseHelper DBHelper;
            private SQLiteDatabase db;

            public DBAdapter(Context ctx) {
                this.context = ctx;
                DBHelper = new DatabaseHelper(context);
            }

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(DATABASE_CREATE);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    Log.w(TAG, "Upgrading database from version " + oldVersion + " to "
                            + newVersion + ", which will destroy all old data");
                    db.execSQL("DROP TABLE IF EXISTS titles");
                    onCreate(db);
                }
            }

            //--- opens the database ---
            public DBAdapter open() throws SQLException {
                db = DBHelper.getWritableDatabase();
                return this;
            }

            //--- closes the database ---
            public void close() {
                DBHelper.close();
            }

            //--- inserts a coordinate pair into the database ---
            public long insertCoords(int latitude, int longitude) {
                ContentValues initialValues = new ContentValues();
                initialValues.put(KEY_LATITUDE, latitude);
                initialValues.put(KEY_LONGITUDE, longitude);
                return db.insert(DATABASE_TABLE, null, initialValues);
            }

            //--- deletes a particular row ---
            public boolean deleteTitle(long rowId) {
                return db.delete(DATABASE_TABLE, KEY_ROWID + "=" + rowId, null) > 0;
            }

            //--- retrieves all the rows ---
            public Cursor getAllTitles() {
                return db.query(DATABASE_TABLE,
                        new String[] { KEY_ROWID, KEY_LATITUDE, KEY_LONGITUDE },
                        null, null, null, null, null);
            }

            //--- retrieves a particular row ---
            public Cursor getTitle(long rowId) throws SQLException {
                Cursor mCursor = db.query(true, DATABASE_TABLE,
                        new String[] { KEY_ROWID, KEY_LATITUDE, KEY_LONGITUDE },
                        KEY_ROWID + "=" + rowId, null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }

            //--- updates a row ---
            /* public boolean updateTitle(long rowId, int latitude, int longitude) {
                ContentValues args = new ContentValues();
                args.put(KEY_LATITUDE, latitude);
                args.put(KEY_LONGITUDE, longitude);
                return db.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0;
            } */
        }

    Now my question is: am I doing something wrong here? If yes, could someone please tell me what? And how should I retrieve the records as an organised list? Help would be greatly appreciated!!
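    A sketch of the display side, assuming the DBAdapter above and a ListActivity: read each row out of the cursor into strings and hand them to an ArrayAdapter (all standard Android API, e.g. in onCreate):

        DBAdapter db = new DBAdapter(this);
        db.open();
        Cursor c = db.getAllTitles();
        ArrayList<String> rows = new ArrayList<String>();
        if (c.moveToFirst()) {
            do {
                // read the two coordinate columns from the current row
                int lat = c.getInt(c.getColumnIndex(DBAdapter.KEY_LATITUDE));
                int lon = c.getInt(c.getColumnIndex(DBAdapter.KEY_LONGITUDE));
                rows.add(lat + ", " + lon);
            } while (c.moveToNext());
        }
        c.close();
        db.close();
        setListAdapter(new ArrayAdapter<String>(this,
                android.R.layout.simple_list_item_1, rows));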


  • Recursing data into a 2 dimensional array in PHP 5

    - by user315699
    I'm getting bamboozled by "foreach" loops and two-dimensional arrays, and I'm a PHP newb, so please bear with me (and ignore any variables with the word "image" - it's all about the mp3s, I just didn't change it from the XML tutorial).

    I found a PHP function on the net that lists the files in a directory, the output of which is:

        Array ( [0] => audio/1.mp3 [1] => audio/2.mp3 [2] => audio/3.mp3 [3] => audio/4.mp3 [4] => audio/5.mp3 )

    as expected. And another that lists some info about mp3 files:

        $mp3datafile = 'audio/1.mp3';
        $m = new mp3file($mp3datafile);
        $mp3dataArray = $m->get_metadata();
        print_r($mp3dataArray);
        unset($mp3dataArray);

    the output of which is:

        Array ( [Filesize] => 31972 [Encoding] => CBR [etc] )

    In order to automatically build RSS for a podcast, I need to generate XML for each item. So far so good. This is how I'm making the XML:

        foreach ($imagearray as $key => $val) {
            $tnsoundfile = $xml_generator->addChild('item');
            $tnsoundfile->addChild('title', $podcasttitle);
            $enclosure = $tnsoundfile->addChild('enclosure');
            $enclosure->addAttribute('url', $val); // that's the filename
            $enclosure->addAttribute('length', $mp3dataArray['Filesize']);
            // length here is the file length in bytes, not time; later I also
            // need $mp3dataArray['Length mm:ss'] for the duration tag
            $enclosure->addAttribute('type', 'audio/mpeg');
            $tnsoundfile->addChild('guid', $path_to_image_dir.'/'.$val);
        }

    (The above has been truncated - I realise it's not proper XML right now, but it was just to show what was going on.) Perfect. But I need to do it for as many files as there are in the directory.

    So: I have an array of the names of the files in the directory in $mp3data, and I have an array of mp3 data in $mp3dataArray from one iteration of the get_metadata() function. If I do the following, I get a nice list of the mp3 data of the 5 files in the directory:

        foreach ($mp3data as $key => $val) {
            $mp3datafile = $val;
            $m = new mp3file($mp3datafile);
            $mp3dataArray = $m->get_metadata();
            print_r($mp3dataArray);
            unset($mp3dataArray);
        }

    as expected. Where I'm struggling, and have been for most of the day in spite of reading many forums and tutorials, is how to populate the "second dimension" of the array, so that it goes through 1.mp3 to 5.mp3 (or however many there are), extracts the metadata, and then allows me to use it in the XML section above. Here's what I have:

        foreach ($mp3data as $key => $val) {
            $mp3datafile = $val;
            $m = new mp3file($mp3datafile);
            $mp3dataArray = $m->get_metadata();
            $mp3testarray = array($mp3dataArray);
        }
        print_r($mp3dataArray);

    Shouldn't that line, print_r($mp3dataArray), give me a nice list of 5 lots of mp3 data, the way it did when I went through the loop as before? This is driving me nuts! It must be something so simple, and any help would be greatly appreciated. Thank you in advance.
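    For what it's worth, the last snippet overwrites its collection on every pass: array($mp3dataArray) builds a brand-new one-element array each time through the loop, so nothing accumulates. A sketch of the append-style version, keyed by filename, which also makes the XML loop straightforward:

        $mp3testarray = array();
        foreach ($mp3data as $val) {
            $m = new mp3file($val);
            $mp3testarray[$val] = $m->get_metadata();   // append, don't replace
        }

        // e.g. walk the 2-D array: filename => metadata
        foreach ($mp3testarray as $filename => $meta) {
            echo $filename . ': ' . $meta['Filesize'] . "\n";
        }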


  • How to remove the xmlns:xsi and xmlns:xsd namespaces from the XML output of a webservice in the .NET framework

    - by Chetan
    Hi. This is an old question - I have seen some solutions on this forum itself - but I am trying to use web services for the first time, so please bear with me on this one. I have a web service that returns XML in the following format:

        <subs xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" msisdn="965xxxxxx">
          <shortcode label="XXXX">
            <channels>
              <channel>
                <id>442</id>
                <name>News</name>
                <billingperiod>7</billingperiod>
                <billingamount>3</billingamount>
                <lastbilling>4/14/2010 1:41:11 PM</lastbilling>
              </channel>
              <channel>
                <id>443</id>
                <name>News2</name>
                <billingperiod>7</billingperiod>
                <billingamount>3</billingamount>
                <lastbilling>4/14/2010 1:41:19 PM</lastbilling>
              </channel>
            </channels>
          </shortcode>
        </subs>

    I want the same XML output without the xmlns:xsd and xmlns:xsi attributes. I tried the following solution that was suggested:

        Public Function GetSubscription(....) As String
            Dim namespaces As New XmlSerializerNamespaces
            namespaces.Add(String.Empty, String.Empty)
            Dim serializer As New XmlSerializer(SubsDetail.GetType)
            Dim sw As New System.IO.StringWriter
            Dim writer As New System.Xml.XmlTextWriter(sw)
            writer.Formatting = Formatting.None
            serializer.Serialize(writer, SubsDetail, namespaces)
            writer.Close()
            Return sw.ToString()
        End Function

    The result was that I got XML in the following format:

        <string>
          <?xml version="1.0" encoding="utf-16"?><subs msisdn="965xxxx">
            ... (the same shortcode/channels elements as above, without the namespaces) ...
          </subs>
        </string>

    Though the format of the XML is now correct, it is coming back as a string within the <string> tags. This is really driving me nuts. Can I get the output as XML without the outer <string> tags?
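    The outer element appears because the web method is declared As String, so the ASMX runtime serializes the return value - the hand-built XML - as a string payload. One hedged way around it (a sketch; the (....) mirrors the question's elided parameter list and must be filled in) is to return a node instead, which the runtime emits as raw XML:

        <WebMethod()> _
        Public Function GetSubscription(....) As System.Xml.XmlDocument
            ' build the XML exactly as before, then hand it back as a document
            Dim doc As New System.Xml.XmlDocument()
            doc.LoadXml(sw.ToString())
            Return doc
        End Function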


  • Possible iPhone animation timing/rendering bug?

    - by David
    Hi all, I have been working on an iPhone app for several weeks. Now I've encountered an animation problem that I can't figure out how to resolve. Maybe you can help. Here are the details (a little long, bear with me):

    Basically the effect I want to achieve is: when the user taps a button, a loading view pops up, hiding the whole screen; the app then does a lot of heavy computation, which takes a few seconds. Once the computation is done, some result views (something like checkers on a checker board) are rendered under the loading view. Once all result views are rendered, I use an animation to remove the loading view and show the result views to the user.

    Here is what I do. When the user taps a button, this code runs:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationDidStopSelector:@selector(loadingViewInserted:finished:context:)];
        // use a really high index so it will always be on top
        [self.view insertSubview:loadingViewController.view atIndex:1000];
        [UIView commitAnimations];

    In the loadingViewInserted callback, another function does the heavy computation. Once the computation is done, a lot of result views (like checkers on a checker board) are rendered under the loading view:

        for (int colIndex = 1; colIndex <= result.columns; colIndex++) {
            for (int rowIndex = 1; rowIndex <= result.rows; rowIndex++) {
                ResultView *rv = [ResultView resultViewWithData:results[colIndex][rowIndex]];
                [self.view addSubview:rv];
            }
        }

    Once all result views are added, the following animation removes the loading view:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES];
        [loadingViewController.view removeFromSuperview];
        [UIView commitAnimations];

    By doing this, most of the time (maybe 90%) it does exactly what I want. Sometimes, however, I see a weird result: the loading view shows up first as expected; then, before it disappears, some result views - which are supposed to be under the loading view - suddenly appear on top of it, some of them only partially rendered. Then the loading view curls up and everything looks normal again. The weird state lasts for less than a second, but that's already bad enough to mess up the UI.

    I have tried all sorts of things to fix this (using another thread to remove the loading view, making the loading view non-transparent), but none of them works. The only thing that makes it a little better is to hide all the result views first and, after the last animation finishes, unhide them in its callback. But this loses the nice effect of the results already being there when the loading view curls up.

    At this point I really think this is a bug in the iPhone OS (I compile it with OS 3.0). Or maybe you can point out what I have done wrong (or could do differently). (Thanks for reading this long post :-))
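    One detail that may explain the 10% case: addSubview: always places a view above all existing siblings, so the result views are actually being added on top of the loading view and usually just haven't been composited before the curl-up starts. A minimal, hedged adjustment is to force the z-order explicitly after the result views are in place, before committing the final animation:

        // keep the loading view frontmost until the curl-up actually runs
        [self.view bringSubviewToFront:loadingViewController.view];

    bringSubviewToFront: is standard UIKit and costs nothing if the view is already on top.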


  • How can I hit my database with an AJAX call using javascript?

    - by tmedge
    I am pretty new at this stuff, so bear with me. I am using ASP.NET MVC. I have created an overlay to cover the page when someone clicks a button corresponding to a certain database entry. Because of this, ALL of my code for this functionality is in a .js file contained within my project. What I need to do is pull the info corresponding to my entry from the database itself using an AJAX call, and place it into my textboxes. Then, after the end user has made the desired changes, I need to update that entry's values to match the input. I've been surfing the web for a while and have failed to find an example that fits my needs. Here is the code in my JavaScript file thus far:

        function editOverlay(picId) {
            // pull up an overlay
            $('body').append('<div class="overlay" />');
            var $overlayClass = $('.overlay');
            $overlayClass.append('<div class="dataModal" />');
            var $data = $('.dataModal');
            overlaySetup($overlayClass, $data);

            // set up form
            $data.append('<h1>Edit Picture</h1><br /><br />');
            $data.append('Picture name: &nbsp;');
            $data.append('<input class="picName" /> <br /><br /><br />');
            $data.append('Relative url: &nbsp;');
            $data.append('<input class="picRelURL" /> <br /><br /><br />');
            $data.append('Description: &nbsp;');
            $data.append('<textarea class="picDescription" /> <br /><br /><br />');

            var $nameBox = $('.picName');
            var $urlBox = $('.picRelURL');
            var $descBox = $('.picDescription');

            var pic = null;
            // this is where I need to pull the actual object from the db
            //var imgList =
            for (var temp in imgList) {
                if (temp.Id == picId) {
                    pic = temp;
                }
            }

            /*
            $nameBox.attr('value', pic.Name);
            $urlBox.attr('value', pic.RelativeURL);
            $descBox.attr('value', pic.Description);
            */

            // close buttons
            $data.append('<input type="button" value="Save Changes" class="saveButton" />');
            $data.append('<input type="button" value="Cancel" class="cancelButton" />');

            $('.saveButton').click(function() {
                /*
                pic.Name = $nameBox.attr('value');
                pic.RelativeURL = $urlBox.attr('value');
                pic.Description = $descBox.attr('value');
                */
                // make a call to my Save() method in my repository
                CloseOverlay();
            });

            $('.cancelButton').click(function() {
                CloseOverlay();
            });
        }

    The commented-out parts are what I need to accomplish and/or are not available until prior issues are resolved. Any and all advice is appreciated! Remember, I am VERY new to this stuff (two weeks, to be exact) and will probably need highly explicit instructions. BTW: overlaySetup() and CloseOverlay() are functions I have living someplace else. Thanks!
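    A sketch of the two AJAX calls, assuming MVC controller actions at /Pictures/Get and /Pictures/Save (those routes and the JSON field names are placeholders, not from the question):

        // load the entry into the textboxes
        $.getJSON('/Pictures/Get', { id: picId }, function (pic) {
            $nameBox.val(pic.Name);
            $urlBox.val(pic.RelativeURL);
            $descBox.val(pic.Description);
        });

        // on save, post the edited values back, then close the overlay
        $.post('/Pictures/Save', {
            id: picId,
            name: $nameBox.val(),
            relativeUrl: $urlBox.val(),
            description: $descBox.val()
        }, function () {
            CloseOverlay();
        });

    The matching controller action would return Json(picture) so the callback receives a plain object (in MVC 2, with JsonRequestBehavior.AllowGet for the GET case).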


  • How to call a virtual function of an object in C++

    - by SoonDead
    I'm struggling with calling a virtual function in C++. I'm not experienced in C++ - I mainly use C# and Java - so I might have some delusions, but bear with me. I have to write a program where I have to avoid dynamic memory allocation if possible. I have made a class called List:

        template <class T>
        class List {
        public:
            T items[maxListLength];
            int length;

            List() { length = 0; }

            T get(int i) const {
                if (i >= 0 && i < length) {
                    return items[i];
                } else {
                    throw "Out of range!";
                }
            }

            // set the value of an already existing element
            void set(int i, T p) {
                if (i >= 0 && i < length) {
                    items[i] = p;
                } else {
                    throw "Out of range!";
                }
            }

            // returns the index of the element
            int add(T p) {
                if (length >= maxListLength) {
                    throw "Too many points!";
                }
                items[length] = p;
                return length++;
            }

            // removes and returns the last element
            T pop() {
                if (length > 0) {
                    return items[--length];
                } else {
                    throw "There is no element to remove!";
                }
            }
        };

    It just makes an array of the given type and manages its length. There is no need for dynamic memory allocation; I can just write:

        List<Object> objects;
        MyObject obj;
        objects.add(obj);

    MyObject inherits from Object. Object has a virtual function which is supposed to be overridden in MyObject:

        struct Object {
            virtual float method(const Input& input) { return 0.0f; }
        };

        struct MyObject : public Object {
            virtual float method(const Input& input) { return 1.0f; }
        };

    I get the elements like this:

        objects.get(0).method(asdf);

    The problem is that even though the first element is a MyObject, Object's method function is called. I'm guessing there is something wrong with storing the object in an array of Objects without dynamically allocating memory for the MyObject, but I'm not sure. Is there a way to call MyObject's method function? How? It's supposed to be a heterogeneous collection, by the way - that's why the inheritance is there in the first place. If there is no way to call MyObject's method function, then how should I make my list in the first place?
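    The guess in the question is right, and the effect has a name: object slicing. T items[maxListLength] with T = Object stores Object values, so objects.add(obj) copies only the Object part of a MyObject into the slot, and the stored copy's dynamic type really is Object. Virtual dispatch needs a pointer or reference to the original object. A minimal sketch that keeps the List class and still avoids dynamic allocation is to store pointers to objects that live elsewhere:

        const int maxListLength = 16;   // assumed; the question doesn't show its value

        List<Object*> objects;          // pointers, so no slicing occurs

        MyObject obj;                   // automatic storage; must outlive its list entry
        objects.add(&obj);

        Input asdf;
        float v = objects.get(0)->method(asdf);   // now dispatches to MyObject::method, 1.0f

    The cost of this design is lifetime management: the list no longer owns its elements, so every stored object has to outlive the entries that point at it.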


  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in recent months, and, well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services, or anything self-hosted for that matter, seem convoluted.

    In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.

    Personally, I much prefer Web applications. Running inside of IIS, I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits, we set out to move from the self-hosted service into an ASP.NET Web app instead.

    The Missing Link for ASP.NET as a Service: Auto-Loading

    I've had moments in the past where I wanted to run a 'service-like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET; it worked great, and it's still running, doing its job today.

    The tricky part for running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. Seven years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive, or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

    Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired, making sure your app stays up and running at all times.
    All the other features, like Application Pool recycling and auto-shutdown after idle time, still work, but IIS will then always immediately re-launch the application.

    Getting started with Application Initialization

    As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. In IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it.

    IIS Configuration for Application Initialization

    Initialization needs to be applied on the Application Pool as well as at the IIS Application level. As of IIS 8, these settings can be made through the IIS Administration console.

    Start with the Application Pool. Here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set: the Start Automatically flag is true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

    Now, at the Site/Application level, you can specify whether the site should preload: set the Preload Enabled flag to true.

    At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

    If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

        <system.webServer>
          <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
            <add initializationPage="ping.ashx" />
          </applicationInitialization>
        </system.webServer>

    This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page, in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request. This request is run immediately after the Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea, since this can be pretty disruptive in some cases. Personally, I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

    Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/), and even if there's no matching request at the end of that request, it'll still fire the request through the IIS pipeline.
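    For reference, the warm-up endpoint mentioned above can be trivial; the article doesn't show ping.ashx itself, so this is just a sketch of what such a handler might look like:

        <%@ WebHandler Language="C#" Class="Ping" %>

        using System.Web;

        public class Ping : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // tiny response; the point is simply to run the ASP.NET pipeline
                context.Response.ContentType = "text/plain";
                context.Response.Write("OK");
            }

            public bool IsReusable { get { return true; } }
        }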
    Ideally, though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

    What about AppDomain Restarts?

    In addition to full worker process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

        - Files are updated in the BIN folder
        - Web Deploy to your site
        - web.config is changed
        - Hard application crash

    These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

        protected void Application_End()
        {
            var client = new WebClient();
            var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
            client.DownloadString(url);
            Trace.WriteLine("Application Shut Down Ping: " + url);
        }

    This fires any ASP.NET URL on the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

    Manual Configuration in ApplicationHost.config

    The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags, so you'll have to edit them manually. When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to add it manually:

        <globalModules>
          <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
        </globalModules>

    Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered.

    Next you need to configure the Application Pool and the Web site. These are the two relevant entries in ApplicationHost.config:

        <system.applicationHost>
          <applicationPools>
            <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning"
                 managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
              <processModel identityType="LocalSystem" setProfileEnvironment="true" />
            </add>
          </applicationPools>
          <sites>
            <site name="Default Web Site" id="1">
              <application path="/MPress.Workflow.WebQueueMessageManager"
                           applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
                <virtualDirectory path="/" physicalPath="C:\Clients\…" />
              </application>
            </site>
          </sites>
        </system.applicationHost>

    On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

    ASP.NET as a Service?

    In the particular application I'm working on currently, we have a queue manager that runs as a standalone service, polling a database queue, picking out jobs and processing them on several threads. The service can spin up any number of threads and keep them alive in the background while IIS is running, doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
    In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

        public class Global : System.Web.HttpApplication
        {
            private static ApplicationScheduler scheduler;
            private static ServiceLauncher launcher;

            protected void Application_Start(object sender, EventArgs e)
            {
                // Pings the service and ensures it stays alive
                scheduler = new ApplicationScheduler()
                {
                    CheckFrequency = 600000
                };
                scheduler.Start();

                launcher = new ServiceLauncher();
                launcher.Start();

                // register so shutdown is controlled
                HostingEnvironment.RegisterObject(launcher);
            }
        }

    By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation are very similar.

    In this application, ASP.NET serves two purposes: it acts as the host for SignalR, and it provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests, or (as we do now) through the SignalR service.

    Registering a Background Object with ASP.NET

    Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method, which is fired to allow your code to respond to a shutdown request. Here's what the IRegisteredObject.Stop() method looks like on the launcher:

        public void Stop(bool immediate = false)
        {
            LogManager.Current.LogInfo("QueueManager Controller Stopped.");
            Controller.StopProcessing();
            Controller.Dispose();
            Thread.Sleep(1500); // give background threads some time
            HostingEnvironment.UnregisterObject(this);
        }

    Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing, to allow clean shutdowns when the AppDomain shuts down.

    Testing it out

    I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to move from the Windows service startup to Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting, because with the new OWIN-based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive pointing at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host.
    Startup also feels faster because of the preload.

    Starting and Stopping the 'Service'

    Because the application is running as a Web server, it's easy to have a Web interface for starting and stopping the services running inside of it. For our queue manager, the SignalR service and front monitoring app have play and stop buttons for toggling the queue. If you want more administrative control, and want it to work more like a Windows Service, you can also stop the application pool explicitly from the command line, which is equivalent to stopping and restarting a service.

    To start and stop from the command line you can use the IIS appcmd tool. To stop:

        > %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

    and to start:

        > %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

    Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command-line tools, the application pool will not auto-restart immediately. You have to start it back up manually.

    What's not to like?

    There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, though generally for server applications this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires it, be offloaded to its own Web server.

    Easier to work with

    But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing, I can simply turn off the auto-launch features and launch the service on demand through IIS, simply by hitting a page on the site. If I want to shut it down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different from debugging any other code.

    Summary

    Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status to the outside world. Whether it's an existing application that needs some background processing for scheduling-related tasks, or whether you create a separate site altogether just to host your service, it's easy to do, and you can leverage the same tool chain you're already using for other Web projects.
    If you have lots of service projects, it's worth considering… give it some thought…

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, SignalR, IIS.


  • Binding Contents of a ListBox to Selected Item from Another Listbox w/JSF & Managed Beans

    - by nieltown
    Hi there! I currently have a JSF application that uses two listboxes. The first, say ListBox1, contains a list of manufacturers; each option comes from the database (via a method getManufacturers in a class called QueryUtil), is populated dynamically, and its value is an integer ID. This part works so far. My goal is to populate a second listbox, say ListBox2, with a list of products sold by the manufacturer. This list of products will come from the database as well (products are linked to a manufacturer via a foreign key relationship, via a method getProducts in my QueryUtil class). How do I go about doing this? Please bear in mind that I've been working with JSF for under 24 hours; I'm willing to accept the fact that I'm missing something very rudimentary. Again, I've been able to populate the list of manufacturers just fine; unfortunately, adding the Products component gives me java.lang.IllegalArgumentException: argument type mismatch faces-config.xml <managed-bean> <managed-bean-name>ProductsBean</managed-bean-name> <managed-bean-class>com.nieltown.ProductsBean</managed-bean-class> <managed-bean-scope>request</managed-bean-scope> <managed-property> <property-name>manufacturers</property-name> <property-class>java.util.ArrayList</property-class> <value>#{manufacturers}</value> </managed-property> <managed-property> <property-name>selectedManufacturer</property-name> <property-class>java.lang.Integer</property-class> <value>#{selectedManufacturer}</value> </managed-property> <managed-property> <property-name>products</property-name> <property-class>java.util.ArrayList</property-class> <value>#{products}</value> </managed-property> <managed-property> <property-name>selectedProduct</property-name> <property-class>java.lang.Integer</property-class> <value>#{selectedProduct}</value> </managed-property> </managed-bean> com.nieltown.ProductsBean public class ProductsBean { private List<SelectItem> manufacturers; private List<SelectItem> products; private Integer selectedManufacturer; private Integer selectedProduct; public ProductsBean() {} public Integer getSelectedManufacturer(){ return selectedManufacturer; } public void setSelectedManufacturer(Integer m){ selectedManufacturer = m; } public Integer getSelectedProduct(){ return selectedProduct; } public void setSelectedProduct(Integer p){ selectedProduct = p; } public List<SelectItem> getManufacturers(){ if(manufacturers == null){ SelectItem option; manufacturers = new ArrayList<SelectItem>(); QueryUtil qu = new QueryUtil(); Map<Integer, String> manufacturersMap = qu.getManufacturers(); for(Map.Entry<Integer, String> entry : manufacturersMap.entrySet()){ option = new SelectItem(entry.getKey(),entry.getValue()); manufacturers.add(option); } } return manufacturers; } public void setManufacturers(List<SelectItem> l){ manufacturers = l; } public List<SelectItem> getProducts(){ if(products == null){ SelectItem option; products = new ArrayList<SelectItem>(); if(selectedManufacturer != null){ QueryUtil qu = new QueryUtil(); Map<Integer, String> productsMap = qu.getProducts(selectedManufacturer); for(Map.Entry<Integer, String> entry : productsMap.entrySet()){ option = new SelectItem(entry.getKey(),entry.getValue()); products.add(option); } } } return products; } public void setProducts(List<SelectItem> l){ products = l; } } JSP fragment <h:selectOneListbox id="manufacturerList" value="#{ProductsBean.selectedManufacturer}" size="10"> <f:selectItems value="#{ProductsBean.manufacturers}" /> </h:selectOneListbox> <h:selectOneListbox id="productList" value="#{ProductsBean.selectedProduct}" size="10"> <f:selectItems value="#{ProductsBean.products}" /> </h:selectOneListbox>

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however, there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create the test table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before: The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_value column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
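    A quick postscript for anyone wanting to reproduce the figures: a minimal harness, assuming the dbo.LOBtest table and the unique index from the scripts above are already in place, is just:

    SET STATISTICS IO, TIME ON;

    UPDATE dbo.LOBtest WITH (TABLOCKX)
    SET some_value = some_value - 1;

    SET STATISTICS IO, TIME OFF;

    Compare the lob logical reads reported with the unique index in place against the non-unique version to watch the sneaky reads appear and disappear.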

    Read the article

  • Uploadify Minimum Image Width And Height

    - by Richard Knop
    So I am using the Uploadify plugin to allow users to upload multiple images at once. The problem is I need to set a minimum width and height for images. Let's say 150x150px is the smallest image users can upload. How can I set this limitation in the Uploadify plugin? When a user tries to upload a smaller picture, I would like to display some error message as well. Here is the PHP file that is called by the plugin to upload images: <?php define('BASE_PATH', substr(dirname(dirname(__FILE__)), 0, -22)); // set the include path set_include_path(BASE_PATH . '/../library' . PATH_SEPARATOR . BASE_PATH . '/library' . PATH_SEPARATOR . get_include_path()); // autoload classes from the library function __autoload($class) { include str_replace('_', '/', $class) . '.php'; } $configuration = new Zend_Config_Ini(BASE_PATH . '/application' . '/configs/application.ini', 'development'); $dbAdapter = Zend_Db::factory($configuration->database); Zend_Db_Table_Abstract::setDefaultAdapter($dbAdapter); function _getTable($table) { include BASE_PATH . '/application/modules/default/models/' . $table . '.php'; return new $table(); } $albums = _getTable('Albums'); $media = _getTable('Media'); if (false === empty($_FILES)) { $tempFile = $_FILES['Filedata']['tmp_name']; $extension = end(explode('.', $_FILES['Filedata']['name'])); // insert temporary row into the database $data = array(); $data['type'] = 'photo'; $data['type2'] = 'public'; $data['status'] = 'temporary'; $data['user_id'] = $_REQUEST['user_id']; $paths = $media->add($data, $extension, $dbAdapter); // save the photo move_uploaded_file($tempFile, BASE_PATH . '/public/' . $paths[0]); // create a thumbnail include BASE_PATH . '/library/My/PHPThumbnailer/ThumbLib.inc.php'; $thumb = PhpThumbFactory::create(BASE_PATH . '/public/' . $paths[0]); $thumb->adaptiveResize(85, 85); $thumb->save(BASE_PATH . '/public/' . $paths[1]); // add watermark to the bottom right corner $pathToFullImage = BASE_PATH . '/public/' . $paths[0]; $size = getimagesize($pathToFullImage); switch ($extension) { case 'gif': $im = imagecreatefromgif($pathToFullImage); break; case 'jpg': $im = imagecreatefromjpeg($pathToFullImage); break; case 'png': $im = imagecreatefrompng($pathToFullImage); break; } if (false !== $im) { $white = imagecolorallocate($im, 255, 255, 255); $font = BASE_PATH . '/public/fonts/arial.ttf'; imagefttext($im, 13, // font size 0, // angle $size[0] - 132, // x axis (top left is [0, 0]) $size[1] - 13, // y axis $white, $font, 'HunnyHive.com'); switch ($extension) { case 'gif': imagegif($im, $pathToFullImage); break; case 'jpg': imagejpeg($im, $pathToFullImage, 100); break; case 'png': imagepng($im, $pathToFullImage, 0); break; } imagedestroy($im); } echo "1"; } And here's the javascript: $(document).ready(function() { $('#photo').uploadify({ 'uploader' : '/flash-uploader/scripts/uploadify.swf', 'script' : '/flash-uploader/scripts/upload-public-photo.php', 'cancelImg' : '/flash-uploader/cancel.png', 'scriptData' : {'user_id' : 'USER_ID'}, 'queueID' : 'fileQueue', 'auto' : true, 'multi' : true, 'sizeLimit' : 2097152, 'fileExt' : '*.jpg;*.jpeg;*.gif;*.png', 'wmode' : 'transparent', 'onComplete' : function() { $.get('/my-account/temporary-public-photos', function(data) { $('#temporaryPhotos').html(data); }); } }); $('#upload_public_photo').hover(function() { var titles = '{'; $('.title').each(function() { var title = $(this).val(); if ('Title...' != title) { var id = $(this).attr('name'); id = id.substr(5); title = jQuery.trim(title); if (titles.length > 1) { titles += ','; } titles += '"' + id + '"' + ':"' + title + '"'; } }); titles += '}'; $('#titles').val(titles); }); }); Now bear in mind that I know how to check image dimensions in the PHP file. But I'm not sure how to modify the javascript so it won't upload images with very small dimensions.
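    As far as I can tell, Uploadify 2.x exposes no client-side dimension check (only sizeLimit and fileExt), so the practical route is to validate on the server and hand the result back to the page. A sketch of the guard, placed right after the empty($_FILES) test in the upload script (150x150 being the stated minimum):

    $imageInfo = getimagesize($_FILES['Filedata']['tmp_name']);
    if ($imageInfo === false || $imageInfo[0] < 150 || $imageInfo[1] < 150) {
        // anything other than "1" can be treated as an error by the page
        echo 'Image must be at least 150x150 pixels.';
        exit;
    }

    In the 2.x API the onComplete callback receives the server response as an argument (function(event, ID, fileObj, response, data), if memory serves), so the handler can test whether response is "1" and display the error text instead of refreshing the photo list.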

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over. The problem with locks There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series): Locks block threads The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. It sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency. Locks are slow Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance. Deadlocks When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see. So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection. Implementation As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method: private void GetBucketAndLockNo( int hashcode, out int bucketNo, out int lockNo, int bucketCount) { // the bucket number is the hashcode (without the initial sign bit) // modulo the number of buckets bucketNo = (hashcode & 0x7fffffff) % bucketCount; // and the lock number is the bucket number modulo the number of locks lockNo = bucketNo % m_locks.Length; } However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array. 
Instead, ConcurrentDictionary uses a strict linked list on each bucket: This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets. Modifying the dictionary All the operations on the dictionary follow the same basic pattern: void AlterBucket(TKey key, ...) { int bucketNo, lockNo; 1: GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length); 2: lock (m_locks[lockNo]) { 3: Node headNode = m_buckets[bucketNo]; 4: Mutate the node linked list as appropriate } } For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern. Performance issues There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.) Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock: two threads, each holding a lock, each trying to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. This would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety. Consequences of growing the array Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method. 
The operation of this method can be boiled down to: private void GrowTable(Node[] buckets) { try { 1: Acquire first lock in the locks array // this causes any other thread trying to take out // all the locks to block because the first lock in the array // is always the one taken out first // check if another thread has already resized the buckets array // while we were waiting to acquire the first lock 2: if (buckets != m_buckets) return; 3: Calculate the new size of the backing array 4: Node[] array = new array[size]; 5: Acquire all the remaining locks 6: Re-hash the contents of the existing buckets into array 7: m_buckets = array; } finally { 8: Release all locks } } As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime); when the second thread is unblocked, it'll see that the array has already been resized and exit without doing anything. There is another case we need to consider; looking back at the AlterBucket method above, consider the following situation: Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8). Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid. Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here: void AlterBucket(TKey key, ...) { while (true) { Node[] buckets = m_buckets; int bucketNo, lockNo; GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, buckets.Length); lock (m_locks[lockNo]) { // if the state has changed, go back to the start if (buckets != m_buckets) continue; Node headNode = m_buckets[bucketNo]; Mutate the node linked list as appropriate } break; } } TryGetValue and GetEnumerator And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything. However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this). 
This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key and value removed before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary: Lockless: TryGetValue GetEnumerator The indexer getter ContainsKey Takes out every lock (lockfull?): Count IsEmpty Keys Values CopyTo ToArray Concurrent principles That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming: Partitioning When using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently with other partitions. Ordered lock-taking When a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks. Lockless reads Read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection. That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!
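    To make the ordered lock-taking principle concrete, here is a minimal sketch (names are illustrative; this is not the BCL source) of the acquire-in-order, release-in-finally pattern that the GrowTable pseudocode above relies on:

    using System;
    using System.Threading;

    class StripedLocks
    {
        private readonly object[] _locks;

        public StripedLocks(int concurrencyLevel)
        {
            _locks = new object[concurrencyLevel];
            for (int i = 0; i < _locks.Length; i++)
                _locks[i] = new object();
        }

        // Every thread walks the array from index 0 upwards, so two threads
        // can never each hold one lock while waiting on the other's.
        private void AcquireAll(ref int acquired)
        {
            for (int i = 0; i < _locks.Length; i++)
            {
                Monitor.Enter(_locks[i]);
                acquired++; // counted after a successful Enter, so ReleaseAll is always safe
            }
        }

        private void ReleaseAll(int acquired)
        {
            for (int i = 0; i < acquired; i++)
                Monitor.Exit(_locks[i]);
        }

        public void WithAllLocks(Action guarded)
        {
            int acquired = 0;
            try
            {
                AcquireAll(ref acquired);
                guarded(); // e.g. re-hash every bucket into a new array
            }
            finally
            {
                ReleaseAll(acquired); // released only at the very end, in a finally block
            }
        }
    }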

    Read the article

  • C++ Mutexes and STL Lists Across Subclasses

    - by Genesis
    I am currently writing a multi-threaded C++ server using Poco and am now at the point where I need to be keeping information on which users are connected, how many connections each of them have, and given it is a proxy server, where each of those connections are proxying through to. For this purpose I have created a ServerStats class which holds an STL list of ServerUser objects. The ServerStats class includes functions which can add and remove objects from the list as well as find a user in the list and return a pointer to them so I can access member functions within any given ServerUser object in the list. The ServerUser class contains an STL list of ServerConnection objects and much like the ServerStats class it contains functions to add, remove and find elements within this list. Now all of the above is working but I am now trying to make it threadsafe. I have defined a Poco::FastMutex within the ServerStats class and can lock/unlock this in the appropriate places so that STL containers are not modified at the same time as being searched for example. I am however having an issue setting up mutexes within the ServerUser class and am getting the following compiler error: /root/poco/Foundation/include/Poco/Mutex.h: In copy constructor 'ServerUser::ServerUser(const ServerUser&)': src/SocksServer.cpp:185: instantiated from 'void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, const _Tp&) [with _Tp = ServerUser]' /usr/include/c++/4.4/bits/stl_list.h:464: instantiated from 'std::_List_node<_Tp>* std::list<_Tp, _Alloc>::_M_create_node(const _Tp&) [with _Tp = ServerUser, _Alloc = std::allocator<ServerUser>]' /usr/include/c++/4.4/bits/stl_list.h:1407: instantiated from 'void std::list<_Tp, _Alloc>::_M_insert(std::_List_iterator<_Tp>, const _Tp&) [with _Tp = ServerUser, _Alloc = std::allocator<ServerUser>]' /usr/include/c++/4.4/bits/stl_list.h:920: instantiated from 'void std::list<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = ServerUser, _Alloc = std::allocator<ServerUser>]' src/SocksServer.cpp:301: instantiated from here /root/poco/Foundation/include/Poco/Mutex.h:164: error: 'Poco::FastMutex::FastMutex(const Poco::FastMutex&)' is private src/SocksServer.cpp:185: error: within this context In file included from /usr/include/c++/4.4/x86_64-linux-gnu/bits/c++allocator.h:34, from /usr/include/c++/4.4/bits/allocator.h:48, from /usr/include/c++/4.4/string:43, from /root/poco/Foundation/include/Poco/Bugcheck.h:44, from /root/poco/Foundation/include/Poco/Foundation.h:147, from /root/poco/Net/include/Poco/Net/Net.h:45, from /root/poco/Net/include/Poco/Net/TCPServerParams.h:43, from src/SocksServer.cpp:1: /usr/include/c++/4.4/ext/new_allocator.h: In member function 'void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, const _Tp&) [with _Tp = ServerUser]': /usr/include/c++/4.4/ext/new_allocator.h:105: note: synthesized method 'ServerUser::ServerUser(const ServerUser&)' first required here src/SocksServer.cpp: At global scope: src/SocksServer.cpp:118: warning: 'std::string getWord(std::string)' defined but not used make: *** [/root/poco/SocksServer/obj/Linux/x86_64/debug_shared/SocksServer.o] Error 1 The code for the ServerStats, ServerUser and ServerConnection classes is below: class ServerConnection { public: bool continue_connection; int bytes_in; int bytes_out; string source_address; string destination_address; ServerConnection() { continue_connection = true; } ~ServerConnection() { } }; class ServerUser { public: string username; int connection_count; string client_ip; ServerUser() { } ~ServerUser() { } ServerConnection* addConnection(string 
source_address, string destination_address) { //FastMutex::ScopedLock lock(_connection_mutex); ServerConnection connection; connection.source_address = source_address; connection.destination_address = destination_address; client_ip = getWord(source_address, ":"); _connections.push_back(connection); connection_count++; return &_connections.back(); } void removeConnection(string source_address) { //FastMutex::ScopedLock lock(_connection_mutex); for(list<ServerConnection>::iterator it = _connections.begin(); it != _connections.end(); it++) { if(it->source_address == source_address) { it = _connections.erase(it); connection_count--; } } } void disconnect() { //FastMutex::ScopedLock lock(_connection_mutex); for(list<ServerConnection>::iterator it = _connections.begin(); it != _connections.end(); it++) { it->continue_connection = false; } } list<ServerConnection>* getConnections() { return &_connections; } private: list<ServerConnection> _connections; //UNCOMMENTING THIS LINE BREAKS IT: //mutable FastMutex _connection_mutex; }; class ServerStats { public: int current_users; ServerStats() { current_users = 0; } ~ServerStats() { } ServerUser* addUser(string username) { FastMutex::ScopedLock lock(_user_mutex); for(list<ServerUser>::iterator it = _users.begin(); it != _users.end(); it++) { if(it->username == username) { return &(*it); } } ServerUser newUser; newUser.username = username; _users.push_back(newUser); current_users++; return &_users.back(); } void removeUser(string username) { FastMutex::ScopedLock lock(_user_mutex); for(list<ServerUser>::iterator it = _users.begin(); it != _users.end(); it++) { if(it->username == username) { _users.erase(it); current_users--; break; } } } ServerUser* getUser(string username) { FastMutex::ScopedLock lock(_user_mutex); for(list<ServerUser>::iterator it = _users.begin(); it != _users.end(); it++) { if(it->username == username) { return &(*it); } } return NULL; } private: list<ServerUser> _users; mutable FastMutex _user_mutex; }; Now I have never used C++ for a project of this size or mutexes for that matter so go easy please :) Firstly, can anyone tell me why the above is causing a compiler error? Secondly, can anyone suggest a better way of storing the information I require? Bear in mind that I need to update this info whenever connections come or go and it needs to be global to the whole server.
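    For what it's worth, the compiler error above is the whole story: std::list stores ServerUser by value, so push_back needs a copy constructor, and the synthesized one tries to copy the FastMutex, whose own copy constructor is private. One fix (a sketch; Poco::SharedPtr is used because the GCC 4.4 toolchain in the error output predates std::shared_ptr, and the ServerUser class is assumed to be the one from the question with the mutex member uncommented):

    #include <list>
    #include <string>
    #include "Poco/Mutex.h"
    #include "Poco/SharedPtr.h"

    class ServerStats {
    public:
        Poco::SharedPtr<ServerUser> addUser(const std::string& username) {
            Poco::FastMutex::ScopedLock lock(_user_mutex);
            Poco::SharedPtr<ServerUser> user(new ServerUser());
            user->username = username;
            _users.push_back(user); // copies the smart pointer, never the ServerUser
            return user;
        }
    private:
        std::list< Poco::SharedPtr<ServerUser> > _users;
        mutable Poco::FastMutex _user_mutex;
    };

    With the list holding pointers, ServerUser can keep its mutable FastMutex member and never needs to be copyable. Hand-writing a copy constructor that skips the mutex would also compile, but copying a live user with its own lock is rarely what you want anyway.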

    Read the article

  • Avoiding Agnostic Jagged Array Flattening in Powershell

    - by matejhowell
    Hello, I'm running into an interesting problem in Powershell, and haven't been able to find a solution to it. When I google (and find things like this post), nothing quite as involved as what I'm trying to do comes up, so I thought I'd post the question here. The problem has to do with multidimensional arrays with an outer array length of one. It appears Powershell is very adamant about flattening arrays like @( @('A') ) becomes @( 'A' ). Here is the first snippet (the prompt is >, btw): > $a = @( @( 'Test' ) ) > $a.gettype().isarray True > $a[0].gettype().isarray False So, I'd like to have $a[0].gettype().isarray be true, so that I can index the value as $a[0][0] (the real world scenario is processing dynamic arrays inside of a loop, and I'd like to get the values as $a[$i][$j], but if the inner item is not recognized as an array but as a string (in my case), you start indexing into the characters of the string, as in $a[0][0] -eq 'T'). I have a couple of long code examples, so I have posted them at the end. And, for reference, this is on Windows 7 Ultimate with PSv2 and PSCX installed. Consider code example 1: I build a simple array manually using the += operator. Intermediate array $w is flattened, and consequently is not added to the final array correctly. I have found solutions online for similar problems, which basically involve putting a comma before the inner array to force the outer array to not flatten, which does work, but again, I'm looking for a solution that can build arrays inside a loop (a jagged array of arrays, processing a CSS file), so if I add the leading comma to the single element array (implemented as intermediate array $y), I'd like to do the same for other arrays (like $z), but that adversely affects how $z is added to the final array. Now consider code example 2: This is closer to the actual problem I am having. When a multidimensional array with one element is returned from a function, it is flattened. It is correct before it leaves the function. And again, these are examples, I'm really trying to process a file without having to know if the function is going to come back with @( @( 'color', 'black') ) or with @( @( 'color', 'black'), @( 'background-color', 'white') ) Has anybody encountered this, and has anybody resolved this? I know I can instantiate framework objects, and I'm assuming everything will be fine if I create an object[], or a List<T>, or something else similar, but I've been dealing with this for a little bit and something sure seems like there has to be a right way to do this (without having to instantiate true framework objects). Code Example 1 function Display($x, [int]$indent, [string]$title) { if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline } if(!$x.GetType().IsArray) { write-host "'$x'" -foregroundcolor cyan } else { write-host '' $s = new-object string(' ', $indent) for($i = 0; $i -lt $x.length; $i++) { write-host "$s[$i]: " -nonewline -foregroundcolor cyan Display $x[$i] $($indent+1) } } if($title -ne '') { write-host '' } } ### Start Program $final = @( @( 'a', 'b' ), @('c')) Display $final 0 'Initial Value' ### How do we do this part ??? ########### ## $w = @( @('d', 'e') ) ## $x = @( @('f', 'g'), @('h') ) ## # But now $w is flat, $w.length = 2 ## ## ## # Even if we put a leading comma (,) ## # in front of the array, $y will work ## # but $w will not. 
This can be a ## # problem inside a loop where you don't ## # know the length of the array, and you ## # need to put a comma in front of ## # single- and multidimensional arrays. ## $y = @( ,@('D', 'E') ) ## $z = @( ,@('F', 'G'), @('H') ) ## ## ## ########################################## $final += $w $final += $x $final += $y $final += $z Display $final 0 'Final Value' ### Desired final value: @( @('a', 'b'), @('c'), @('d', 'e'), @('f', 'g'), @('h'), @('D', 'E'), @('F', 'G'), @('H') ) ### As in the below: # # Initial Value: # [0]: # [0]: 'a' # [1]: 'b' # [1]: # [0]: 'c' # # Final Value: # [0]: # [0]: 'a' # [1]: 'b' # [1]: # [0]: 'c' # [2]: # [0]: 'd' # [1]: 'e' # [3]: # [0]: 'f' # [1]: 'g' # [4]: # [0]: 'h' # [5]: # [0]: 'D' # [1]: 'E' # [6]: # [0]: 'F' # [1]: 'G' # [7]: # [0]: 'H' Code Example 2 function Display($x, [int]$indent, [string]$title) { if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline } if(!$x.GetType().IsArray) { write-host "'$x'" -foregroundcolor cyan } else { write-host '' $s = new-object string(' ', $indent) for($i = 0; $i -lt $x.length; $i++) { write-host "$s[$i]: " -nonewline -foregroundcolor cyan Display $x[$i] $($indent+1) } } if($title -ne '') { write-host '' } } function funA() { $ret = @() $temp = @(0) $temp[0] = @('p', 'q') $ret += $temp Display $ret 0 'Inside Function A' return $ret } function funB() { $ret = @( ,@('r', 's') ) Display $ret 0 'Inside Function B' return $ret } ### Start Program $z = funA Display $z 0 'Return from Function A' $z = funB Display $z 0 'Return from Function B' ### Desired final value: @( @('p', 'q') ) and same for r,s ### As in the below: # # Inside Function A: # [0]: # [0]: 'p' # [1]: 'q' # # Return from Function A: # [0]: # [0]: 'p' # [1]: 'q' Thanks, Matt
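    For what it's worth, the standard fix for the function case is one more unary comma at the return: the pipeline unrolls the outermost array once on the way out of a function, so wrapping the result one level deeper hands the caller the jagged array intact. A sketch against funB above:

    function funB() {
        $ret = @( ,@('r', 's') )
        # the leading comma wraps $ret in a one-element array; the pipeline
        # unrolls that wrapper, and the caller receives $ret itself, still jagged
        return ,$ret
    }

    $z = funB
    $z[0].GetType().IsArray   # now True
    $z[0][0]                  # 'r'

    The same trick applies when accumulating inside a loop: append with $final += ,$w so that += sees a one-element wrapper and adds $w as a single jagged element instead of splicing its contents into $final.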

    Read the article

  • How can I connect to a mail server using SMTP over SSL using Python?

    - by jakecar
    Hello, So I have been having a hard time sending email from my school's email address. It is SSL and I could only find this code online by Matt Butcher that works with SSL: import smtplib, socket version = "1.00" __all__ = ['SMTPSSLException', 'SMTP_SSL'] SSMTP_PORT = 465 class SMTPSSLException(smtplib.SMTPException): """Base class for exceptions resulting from SSL negotiation.""" class SMTP_SSL (smtplib.SMTP): """This class provides SSL access to an SMTP server. SMTP over SSL typically listens on port 465. Unlike StartTLS, SMTP over SSL makes an SSL connection before doing a helo/ehlo. All transactions, then, are done over an encrypted channel. This class is a simple subclass of the smtplib.SMTP class that comes with Python. It overrides the connect() method to use an SSL socket, and it overrides the starttls() function to throw an error (you can't do starttls within an SSL session). """ certfile = None keyfile = None def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None): """Initialize a new SSL SMTP object. If specified, `host' is the name of the remote host to which this object will connect. If specified, `port' specifies the port (on `host') to which this object will connect. `local_hostname' is the name of the localhost. By default, the value of socket.getfqdn() is used. An SMTPConnectError is raised if the SMTP host does not respond correctly. An SMTPSSLException is raised if SSL negotiation fails. Warning: This object uses socket.ssl(), which does not do client-side verification of the server's cert. """ self.certfile = certfile self.keyfile = keyfile smtplib.SMTP.__init__(self, host, port, local_hostname) def connect(self, host='localhost', port=0): """Connect to an SMTP server using SSL. `host' is localhost by default. Port will be set to 465 (the default SSL SMTP port) if no port is specified. If the host name ends with a colon (`:') followed by a number, that suffix will be stripped off and the number interpreted as the port number to use. This will override the `port' parameter. Note: This method is automatically invoked by __init__, if a host is specified during instantiation. """ # MB: Most of this (Except for the socket connection code) is from # the SMTP.connect() method. I changed only the bare minimum for the # sake of compatibility. if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: host, port = host[:i], host[i+1:] try: port = int(port) except ValueError: raise socket.error, "nonnumeric port" if not port: port = SSMTP_PORT if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) msg = "getaddrinfo returns an empty list" self.sock = None for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res try: self.sock = socket.socket(af, socktype, proto) if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) self.sock.connect(sa) # MB: Make the SSL connection. sslobj = socket.ssl(self.sock, self.keyfile, self.certfile) except socket.error, msg: if self.debuglevel > 0: print>>stderr, 'connect fail:', (host, port) if self.sock: self.sock.close() self.sock = None continue break if not self.sock: raise socket.error, msg # MB: Now set up fake socket and fake file classes. # Thanks to the design of smtplib, this is all we need to do # to get SSL working with all other methods. 
self.sock = smtplib.SSLFakeSocket(self.sock, sslobj) self.file = smtplib.SSLFakeFile(sslobj); (code, msg) = self.getreply() if self.debuglevel > 0: print>>stderr, "connect:", msg return (code, msg) def setkeyfile(self, keyfile): """Set the absolute path to a file containing a private key. This method will only be effective if it is called before connect(). This key will be used to make the SSL connection.""" self.keyfile = keyfile def setcertfile(self, certfile): """Set the absolute path to a file containing a x.509 certificate. This method will only be effective if it is called before connect(). This certificate will be used to make the SSL connection.""" self.certfile = certfile def starttls(self): """Raises an exception. You cannot do StartTLS inside of an ssl session. Calling starttls() will raise an SMTPSSLException""" raise SMTPSSLException, "Cannot perform StartTLS within SSL session." And then my code: import ssmtplib conn = ssmtplib.SMTP_SSL('HOST') conn.login('USERNAME','PW') conn.ehlo() conn.sendmail('FROM_EMAIL', 'TO_EMAIL', "MESSAGE") conn.close() And got this error: /Users/Jake/Desktop/Beth's Program/ssmtplib.py:116: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. sslobj = socket.ssl(self.sock, self.keyfile, self.certfile) Traceback (most recent call last): File "emailer.py", line 5, in <module> conn = ssmtplib.SMTP_SSL('HOST') File "/Users/Jake/Desktop/Beth's Program/ssmtplib.py", line 79, in __init__ smtplib.SMTP.__init__(self, host, port, local_hostname) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/smtplib.py", line 239, in __init__ (code, msg) = self.connect(host, port) File "/Users/Jake/Desktop/Beth's Program/ssmtplib.py", line 131, in connect self.sock = smtplib.SSLFakeSocket(self.sock, sslobj) AttributeError: 'module' object has no attribute 'SSLFakeSocket' Thank you!
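    For what it's worth, the traceback shows Python 2.6, and smtplib gained a built-in SMTP_SSL class in that same release (the shim above pokes at smtplib internals such as SSLFakeSocket that were removed in 2.6, hence the AttributeError). The whole module can be replaced with:

    import smtplib

    # host, port and credentials below are placeholders for your school's server
    conn = smtplib.SMTP_SSL('smtp.example.edu', 465)
    conn.login('USERNAME', 'PW')
    conn.sendmail('FROM_EMAIL', 'TO_EMAIL', 'Subject: test\r\n\r\nMESSAGE')
    conn.quit()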

    Read the article

  • Adding attachments to HumanTasks *beforehand*

    - by ccasares
    For a demo I'm preparing along with a partner, we need to add some attachments to a HumanTask beforehand, that is, the attachment must be associated already to the Task by the time the user opens its Form. How to achieve this? Indeed, it's quite simple and just a matter of some mappings to the Task's input execData structure. Oracle BPM supports "default" attachments (which use BPM tables) or UCM-based ones. The way to insert attachments for both methods is pretty similar. With default attachments When using default attachments, first we need to have the attachment payload as part of the BPM process, that is, it must be contained in a variable. Normally the attachment content is binary, so we'll need first to convert it to a base64 string (not covered in this blog entry). What we need to do is just to map the following execData parameters as part of the input of the HumanTask: execData.attachment[n].content            <-- the base64 payload data execData.attachment[n].mimeType           <-- depends on your attachment (e.g.: "application/pdf") execData.attachment[n].name               <-- attachment name (just the name you want to use. No need to be the original filename) execData.attachment[n].attachmentScope    <-- BPM or TASK (depending on your needs) execData.attachment[n].storageType        <-- TASK execData.attachment[n].doesBelongToParent <-- false (not sure if this one is really needed, but it definitely doesn't hurt) execData.attachment[n].updatedBy          <-- username who is attaching it execData.attachment[n].updatedDate        <-- dateTime of when this attachment is attached  Bear in mind that the attachment structure is a repetitive one. So if you need to add more than one attachment, you'll need to use XSLT mapping (see the sketch at the end of this entry). If not, the Assign mapper automatically adds [1] for the iteration.  With UCM-based attachments With UCM-based attachments, the procedure is basically the same. We'll need to map some extra fields and not to map others. The tricky part with UCM-based attachments is what we need to know beforehand about the attachment itself. Of course, we don't need to have the payload, but a couple of pieces of information from the attachment that must be checked in already in UCM. First, let's see the mappings: execData.attachment[n].mimeType           <-- Document's dFormat attribute (1) execData.attachment[n].name               <-- attachment name (just the name you want to 
use. No need to be the original filename) execData.attachment[n].attachmentScope    <-- BPM or TASK (depending on your needs) execData.attachment[n].storageType        <-- UCM execData.attachment[n].doesBelongToParent <-- false (not sure if this one is really needed, but it definitely doesn't hurt) execData.attachment[n].updatedBy          <-- username who is attaching it execData.attachment[n].updatedDate        <-- dateTime of when this attachment is attached  execData.attachment[n].uri                <-- "ecm://<dID>" where dID is document's dID attribute (2) execData.attachment[n].ucmDocType         <-- Document's dDocType attribute (3) execData.attachment[n].securityGroup      <-- Document's dSecurityGroup attribute (4) execData.attachment[n].revision           <-- Document's dRevisionID attribute (5) execData.attachment[n].ucmMetadataItem[1].name  <-- "DocUrl" execData.attachment[n].ucmMetadataItem[1].type  <-- STRING execData.attachment[n].ucmMetadataItem[1].value <-- Document's url attribute (6)  Where do you get those numbered fields? In my case I get them from a Search call to UCM (not covered in this blog entry). As I mentioned above, we must know which UCM document we're going to attach. We may know its ID, its name... whatever we need to uniquely identify it when calling the IDC Search method. This method returns ALL the info we need to fill in the fields labeled with a number above.  The only tricky one is (6). The UCM Search service returns the url attribute as a context-root without hostname:port. E.g.: /cs/groups/public/documents/document/dgvs/mdaw/~edisp/ccasareswcptel000239.pdf However, we do need to include the fully qualified URL when mapping (6). Where to get the http://<hostname>:<port> value? Honestly, I have no clue. What I usually do is use a BPM property that can always be modified at runtime if needed. There are some other fields that might be needed in the execData.attachment structure, like account (if UCM is using Accounts). But for demos I've never needed to use them, so I'm not sure whether they're necessary or not. Feel free to add some comments to this entry if you know ;-)  That's all folks. Should you need help with the UCM Search service, let me know and I can write a quick entry on that topic.
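    For the repetitive case mentioned above, a minimal XSLT sketch (the ns0 source namespace and element names are purely illustrative; take the real ones from your own payload and from the task's execData schema):

    <xsl:for-each select="/ns0:attachments/ns0:attachment">
      <attachment>
        <name><xsl:value-of select="ns0:name"/></name>
        <mimeType><xsl:value-of select="ns0:mimeType"/></mimeType>
        <content><xsl:value-of select="ns0:content"/></content>
        <attachmentScope>TASK</attachmentScope>
        <storageType>TASK</storageType>
      </attachment>
    </xsl:for-each>

    The for-each is what the Assign mapper cannot generate for you: it emits one execData attachment entry per source attachment instead of the fixed [1] index.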

    Read the article

  • redirecting back to php file

    - by tooepic
    hello again, following code is my php file that will list the people in my text file. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>viewlist php</title> </head> <body> <h1>List</h1> <?php $file = file("peoplelist.txt"); for($i=0; $i<count($file); $i++) { $person = explode(",", $file[$i]); echo "<hr />"; echo "<table cellspacing=10><tr><td>", $i+1,".", "</td>"; echo "<td>", $person[0], "<br />"; echo $person[1], "</td></tr></table>"; } ?> <hr /> <p> <a href="sortatoz.php" target="_self">Sort A-Z</a><br /> <a href="sortztoa.php" target="_self">Sort Z-A</a><br /> </p> </body> </html> what i want to do is, when i click Sort A-Z link, the file called sortatoz.php will sort the list in my text file and redirect back to viewlist.php with the list in sort order. below is my sortatoz.php: <?php header("Location: http://myserver/workspace/viewlist.php"); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>sort a to z</title> </head> <body> <h1>List</h1> <?php $file = file("peoplelist.txt"); sort($file); for($i=0; $i<count($file); $i++) { $person = explode(",", $file[$i]); echo "<hr />"; echo "<table cellspacing=10><tr><td>", $i+1,".", "</td>"; echo "<td>", $person[0], "<br />"; echo $person[1], "</td></tr></table>"; } ?> <hr /> <p> <a href="sortvisitorsascending.php" target="_self">Sort Visitors A-Z</a><br /> <a href="sortvisitorsdescending.php" target="_self">Sort Visitors Z-A</a><br /> </p> </body> </html> now, when i click Sort A-Z link, it redirects back to viewlist.php...so I'm assuming the header() function is doing its job. but the problem is...it's not sorting. i am very new with this, so bear with me and give me some guidance please. what can i do to my codes to redirect back to viewlist.php with the sorted list? thanks in advance.
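    The reason it isn't sorting: header() sends the redirect, but the sort() that follows only rearranges a local variable; nothing ever writes the sorted order back to peoplelist.txt, so viewlist.php re-reads the file exactly as it was. A sketch of sortatoz.php that persists the sort first and drops the unreachable HTML (file_put_contents assumes PHP 5):

    <?php
    $file = file('peoplelist.txt');
    sort($file);
    // file() keeps each line's trailing newline, so a plain implode rebuilds the file
    file_put_contents('peoplelist.txt', implode('', $file));
    header('Location: http://myserver/workspace/viewlist.php');
    exit;
    ?>

    An alternative that leaves the file untouched is to link to viewlist.php?sort=asc and have viewlist.php sort (or rsort) the array itself based on $_GET['sort'].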

    Read the article

  • setIncludesSubentities: in an NSFetchRequest is broken for entities across multiple persistent store

    - by SG
    Prior art which doesn't quite address this: http://stackoverflow.com/questions/1774359/core-data-migration-error-message-model-does-not-contain-configuration-xyz I have narrowed this down to a specific issue. It takes a minute to set up, though; please bear with me. The gist of the issue is that a persistentStoreCoordinator (apparently) cannot preserve the part of an object graph where a managedObject is marked as a subentity of another when they are stored in different files. Here goes... 1) I have 2 xcdatamodel files, each containing a single entity. In runtime, when the managed object model is constructed, I manually define one entity as subentity of another using setSubentities:. This is because defining subentities across multiple files in the editor is not supported yet. I then return the complete model with modelByMergingModels. //Works! [mainEntity setSubentities:canvasEntities]; NSLog(@"confirm %@ is super for %@", [[[canvasEntities lastObject] superentity] name], [[canvasEntities lastObject] name]); //Output: "confirm Note is super for Browser" 2) I have modified the persistentStoreCoordinator method so that it sets a different store for each entity. Technically, it uses configurations, and each entity has one and only one configuration defined. //Also works! for ( NSString *configName in [[HACanvasPluginManager shared].registeredCanvasTypes valueForKey:@"viewControllerClassName"] ) { storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:[configName stringByAppendingPathExtension:@"sqlite"]]]; //NSLog(@"entities for configuration '%@': %@", configName, [[[self managedObjectModel] entitiesForConfiguration:configName] valueForKey:@"name"]); //Output: "entities for configuration 'HATextCanvasController': (Note)" //Output: "entities for configuration 'HAWebCanvasController': (Browser)" if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:configName URL:storeUrl options:options error:&error]) //etc 3) I have a fetchRequest set for the parent entity, with setIncludesSubentities: and setAffectedStores: just to be sure we get both 1) and 2) covered. When inserting objects of either entity, they both are added to the context and they both are fetched by the fetchedResultsController and displayed in the tableView as expected. // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; [fetchRequest setEntity:entity]; [fetchRequest setIncludesSubentities:YES]; //NECESSARY to fetch all canvas types [fetchRequest setSortDescriptors:sortDescriptors]; [fetchRequest setFetchBatchSize:20]; // Set the batch size to a suitable number. [fetchRequest setAffectedStores:[[managedObjectContext persistentStoreCoordinator] persistentStores]]; [fetchRequest setReturnsObjectsAsFaults:NO]; Here is where it starts misbehaving: after closing and relaunching the app, ONLY THE PARENT ENTITY is fetched. If I change the entity of the request using setEntity: to the entity for 'Note', all notes are fetched. If I change it to the entity for 'Browser', all the browsers are fetched. Let me reiterate that during the run in which an object is first inserted into the context, it will appear in the list. It is only after save and relaunch that a fetch request fails to traverse the hierarchy. Therefore, I can only conclude that it is the storage of the inheritance that is the problem. 
    Let's recap why:

    - Both entities can be created, inserted into the context, and viewed, so the model is working.
    - Both entities can be fetched with a single request, so the inheritance is working.
    - I can confirm that the files are being stored separately and that objects are going into their appropriate stores, so saving is working.
    - Launching the app with either entity set on the request works, so retrieval from each store is working.
    - This also means that traversing different stores with the request is working.
    - Using a single store instead of multiple stores makes the problem go away completely, so creating, storing, fetching, viewing etc. are all working correctly.

    This leaves only one culprit (to my mind): the inheritance I set with setSubentities: is effective only for objects created during the current session. Either objects/entities are being stored stripped of the inheritance info, or entity inheritance defined programmatically only applies to new instances, or both. Either of these is unacceptable: it's a bug, or I am way, way off course. I have been at this every which way for two days; any insight is greatly appreciated.

    The current workaround - just using a single store - works completely, except that it won't be future-proof in the event that I remove one of the models from the app. It also boggles the mind, because I can't see why all this infrastructure for storing across multiple stores, and for setting affected stores on fetch requests, would exist if by core definition (of setSubentities:) it doesn't work.

    Read the article

  • Creating a new instance, C#

    - by Dave Voyles
    This sounds like a very n00b question, but bear with me here: I'm trying to access the position of my bat (paddle) in my pong game and use it in my ball class, because I want a particle effect to go off at the point of contact where the ball hits the bat. Each time the ball hits the bat, I receive an error stating that I haven't created an instance of the bat. I understand that I need to create one (or could use a static class), but I'm not sure how to do so in this example. I've included both my Bat and Ball classes.

        namespace Pong
        {
            #region Using Statements
            using System;
            using System.Collections.Generic;
            using Microsoft.Xna.Framework;
            using Microsoft.Xna.Framework.Audio;
            using Microsoft.Xna.Framework.Content;
            using Microsoft.Xna.Framework.Graphics;
            using Microsoft.Xna.Framework.Input;
            #endregion

            public class Ball
            {
                #region Fields
                private readonly Random rand;
                private readonly Texture2D texture;
                private readonly SoundEffect warp;
                private double direction;
                private bool isVisible;
                private float moveSpeed;
                private Vector2 position;
                private Vector2 resetPos;
                private Rectangle size;
                private float speed;
                private bool isResetting;
                private bool collided;
                private Vector2 oldPos;
                private ParticleEngine particleEngine;
                private ContentManager contentManager;
                private SpriteBatch spriteBatch;
                private bool hasHitBat;
                private AIBat aiBat;
                private Bat bat;
                #endregion

                #region Constructors and Destructors

                /// <summary>
                /// Constructor for the ball
                /// </summary>
                public Ball(ContentManager contentManager, Vector2 ScreenSize)
                {
                    moveSpeed = 15f;
                    speed = 0;
                    texture = contentManager.Load<Texture2D>(@"gfx/balls/redBall");
                    direction = 0;
                    size = new Rectangle(0, 0, texture.Width, texture.Height);
                    resetPos = new Vector2(ScreenSize.X / 2, ScreenSize.Y / 2);
                    position = resetPos;
                    rand = new Random();
                    isVisible = true;
                    hasHitBat = false;

                    // Everything to do with particles
                    List<Texture2D> textures = new List<Texture2D>();
                    textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/circle"));
                    textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/star"));
                    textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/diamond"));
                    particleEngine = new ParticleEngine(textures, new Vector2());
                }

                #endregion

                #region Public Methods and Operators

                /// <summary>
                /// Checks for the collision between the bat and the ball.
                /// Sends the ball in the appropriate direction.
                /// </summary>
                public void BatHit(int block)
                {
                    if (direction > Math.PI * 1.5f || direction < Math.PI * 0.5f)
                    {
                        hasHitBat = true;
                        particleEngine.EmitterLocation = new Vector2(aiBat.Position.X, aiBat.Position.Y);
                        switch (block)
                        {
                            case 1: direction = MathHelper.ToRadians(200); break;
                            case 2: direction = MathHelper.ToRadians(195); break;
                            case 3: direction = MathHelper.ToRadians(180); break;
                            case 4: direction = MathHelper.ToRadians(180); break;
                            case 5: direction = MathHelper.ToRadians(165); break;
                        }
                    }
                    else
                    {
                        hasHitBat = true;
                        particleEngine.EmitterLocation = new Vector2(bat.Position.X, bat.Position.Y);
                        switch (block)
                        {
                            case 1: direction = MathHelper.ToRadians(310); break;
                            case 2: direction = MathHelper.ToRadians(345); break;
                            case 3: direction = MathHelper.ToRadians(0); break;
                            case 4: direction = MathHelper.ToRadians(15); break;
                            case 5: direction = MathHelper.ToRadians(50); break;
                        }
                    }

                    if (rand.Next(2) == 0) { direction += MathHelper.ToRadians(rand.Next(3)); }
                    else { direction -= MathHelper.ToRadians(rand.Next(3)); }

                    AudioManager.Instance.PlaySoundEffect("hit");
                }

                /// <summary>
                /// JEP - added method to slow down the ball after a powerup deactivates
                /// </summary>
                public void DecreaseSpeed() { moveSpeed -= 0.6f; }

                /// <summary>
                /// Draws the ball on the screen
                /// </summary>
                public void Draw(SpriteBatch spriteBatch)
                {
                    if (isVisible)
                    {
                        spriteBatch.Begin();
                        spriteBatch.Draw(texture, size, Color.White);
                        spriteBatch.End();

                        // Draws sprites for particles when contact is made
                        particleEngine.Draw(spriteBatch);
                    }
                }

                /// <summary>
                /// Returns the current direction of the ball
                /// </summary>
                public double GetDirection() { return direction; }

                /// <summary>
                /// Returns the current position of the ball
                /// </summary>
                public Vector2 GetPosition() { return position; }

                /// <summary>
                /// Returns the current size of the ball (for the powerups)
                /// </summary>
                public Rectangle GetSize() { return size; }

                /// <summary>
                /// Grows the size of the ball when the GrowBall powerup is used
                /// </summary>
                public void GrowBall() { size = new Rectangle(0, 0, texture.Width * 2, texture.Height * 2); }

                /// <summary>
                /// Was used to increase the speed of the ball after each point is scored.
                /// No longer used, but am considering implementing again.
                /// </summary>
                public void IncreaseSpeed() { moveSpeed += 0.6f; }

                /// <summary>
                /// Returns the ball to its normal size after the powerup has expired
                /// </summary>
                public void NormalBallSize() { size = new Rectangle(0, 0, texture.Width, texture.Height); }

                /// <summary>
                /// Returns the ball to its normal speed after the powerup has expired
                /// </summary>
                public void NormalSpeed() { moveSpeed += 15f; }

                /// <summary>
                /// Checks whether the ball went out of bounds, and triggers the warp sfx
                /// </summary>
                public void OutOfBounds()
                {
                    // Checks if the player is still alive or not
                    if (isResetting)
                    {
                        AudioManager.Instance.PlaySoundEffect("warp");
                        {
                            // Used to stop the issue where the ball-hit sfx kept going off when detecting collision
                            isResetting = false;
                            AudioManager.Instance.Dispose();
                        }
                    }
                }

                /// <summary>
                /// Speed for the ball when the Speedball powerup is activated
                /// </summary>
                public void PowerupSpeed() { moveSpeed += 20.0f; }

                /// <summary>
                /// Determines where to reset the ball after each point is scored
                /// </summary>
                public void Reset(bool left)
                {
                    if (left) { direction = 0; }
                    else { direction = Math.PI; }

                    // Used to stop the issue where the ball-hit sfx kept going off when detecting collision
                    isResetting = true;
                    position = resetPos; // Resets the ball to the center of the screen
                    isVisible = true;
                    speed = 15f; // Returns the ball to the default speed, in case the speedball was active

                    if (rand.Next(2) == 0) { direction += MathHelper.ToRadians(rand.Next(30)); }
                    else { direction -= MathHelper.ToRadians(rand.Next(30)); }
                }

                /// <summary>
                /// Shrinks the ball when the ShrinkBall powerup is activated
                /// </summary>
                public void ShrinkBall() { size = new Rectangle(0, 0, texture.Width / 2, texture.Height / 2); }

                /// <summary>
                /// Stops the ball each time it is reset, e.g. between points / rounds
                /// </summary>
                public void Stop()
                {
                    isVisible = true;
                    speed = 0;
                }

                /// <summary>
                /// Updates the position of the ball
                /// </summary>
                public void UpdatePosition()
                {
                    size.X = (int)position.X;
                    size.Y = (int)position.Y;
                    oldPos.X = position.X;
                    oldPos.Y = position.Y;
                    position.X += speed * (float)Math.Cos(direction);
                    position.Y += speed * (float)Math.Sin(direction);
                    bool collided = CheckWallHit();
                    particleEngine.Update();

                    // Stops the issue where the ball was oscillating on the ceiling or floor
                    if (collided)
                    {
                        position.X = oldPos.X + speed * (float)Math.Cos(direction);
                        position.Y = oldPos.Y + speed * (float)Math.Sin(direction);
                    }
                }

                #endregion

                #region Methods

                /// <summary>
                /// Checks for collision with the ceiling or floor. 2 * Math.PI = 360 degrees
                /// </summary>
                private bool CheckWallHit()
                {
                    while (direction > 2 * Math.PI)
                    {
                        direction -= 2 * Math.PI;
                        return true;
                    }

                    while (direction < 0)
                    {
                        direction += 2 * Math.PI;
                        return true;
                    }

                    if (position.Y <= 0 || (position.Y > resetPos.Y * 2 - size.Height))
                    {
                        direction = 2 * Math.PI - direction;
                        return true;
                    }

                    return true;
                }

                #endregion
            }
        }

        namespace Pong
        {
            using Microsoft.Xna.Framework;
            using Microsoft.Xna.Framework.Content;
            using Microsoft.Xna.Framework.Graphics;
            using System;

            public class Bat
            {
                public Vector2 Position;
                public float moveSpeed;
                public Rectangle size;
                private int points;
                private int yHeight;
                private Texture2D leftBat;
                public float turbo;
                public float recharge;
                public float interval;
                public bool isTurbo;

                /// <summary>
                /// Constructor for the bat
                /// </summary>
                public Bat(ContentManager contentManager, Vector2 screenSize, bool side)
                {
                    moveSpeed = 7f;
                    turbo = 15f;
                    recharge = 100f;
                    points = 0;
                    interval = 5f;
                    leftBat = contentManager.Load<Texture2D>(@"gfx/bats/batGrey");
                    size = new Rectangle(0, 0, leftBat.Width, leftBat.Height);

                    // True means left bat, false means right bat.
                    if (side)
                        Position = new Vector2(30, screenSize.Y / 2 - size.Height / 2);
                    else
                        Position = new Vector2(screenSize.X - 30, screenSize.Y / 2 - size.Height / 2);

                    yHeight = (int)screenSize.Y;
                }

                public void IncreaseSpeed() { moveSpeed += .5f; }

                /// <summary>
                /// The speed of the bat when Turbo is activated
                /// </summary>
                public void Turbo() { moveSpeed += 8.0f; }

                /// <summary>
                /// Returns the speed of the bat to normal after Turbo is deactivated
                /// </summary>
                public void DisableTurbo()
                {
                    moveSpeed = 7.0f;
                    isTurbo = false;
                }

                /// <summary>
                /// Returns the bat to the normal size after the Grow/Shrink powerup has expired
                /// </summary>
                public void NormalSize() { size = new Rectangle(0, 0, leftBat.Width, leftBat.Height); }

                /// <summary>
                /// Returns the size of the bat
                /// </summary>
                public Rectangle GetSize() { return size; }

                /// <summary>
                /// Adds a point to the player or the AI after scoring. Currently disabled.
                /// </summary>
                public void IncrementPoints() { points++; }

                /// <summary>
                /// Returns the current number of points
                /// </summary>
                public int GetPoints() { return points; }

                /// <summary>
                /// Sets the default starting position for the bats
                /// </summary>
                /// <param name="position"></param>
                public void SetPosition(Vector2 position)
                {
                    if (position.Y < 0) { position.Y = 0; }
                    if (position.Y > yHeight - size.Height) { position.Y = yHeight - size.Height; }
                    this.Position = position;
                }

                /// <summary>
                /// Returns the current position of the bat
                /// </summary>
                public Vector2 GetPosition() { return Position; }

                /// <summary>
                /// Moves the bat up the screen
                /// </summary>
                public void MoveUp() { SetPosition(Position + new Vector2(0, -moveSpeed)); }

                /// <summary>
                /// Moves the bat down the screen
                /// </summary>
                public void MoveDown() { SetPosition(Position + new Vector2(0, moveSpeed)); }

                /// <summary>
                /// Updates the position of the AI bat, in order to track the ball
                /// </summary>
                /// <param name="ball"></param>
                public virtual void UpdatePosition(Ball ball)
                {
                    size.X = (int)Position.X;
                    size.Y = (int)Position.Y;
                }

                /// <summary>
                /// Resets the bat to the center location after a new game starts
                /// </summary>
                public void ResetPosition() { SetPosition(new Vector2(GetPosition().X, yHeight / 2 - size.Height)); }

                /// <summary>
                /// Used for the GrowBat powerup: doubles the size of the bat collision
                /// </summary>
                public void GrowBat() { size = new Rectangle(0, 0, leftBat.Width * 2, leftBat.Height * 2); }

                /// <summary>
                /// Used for the ShrinkBat powerup: halves the size of the bat collision
                /// </summary>
                public void ShrinkBat() { size = new Rectangle(0, 0, leftBat.Width / 2, leftBat.Height / 2); }

                /// <summary>
                /// Draws the bats
                /// </summary>
                public virtual void Draw(SpriteBatch batch) { batch.Draw(leftBat, size, Color.White); }
            }
        }
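    The "no instance" error comes from the bat and aiBat fields in Ball, which are declared but never assigned, so they are null when BatHit dereferences them. One way out is to hand the Ball the bats that the game already owns through its constructor. A minimal sketch of that fix follows; the LoadContent placement and the AIBat constructor signature are assumptions here, mirroring the Bat constructor above:

        namespace Pong
        {
            using Microsoft.Xna.Framework;
            using Microsoft.Xna.Framework.Content;

            public class Ball
            {
                // Keep references to the bats the game already created, instead of
                // leaving the bat/aiBat fields unassigned - the unassigned fields
                // are what raise the "haven't created an instance" error in BatHit.
                private readonly Bat bat;
                private readonly AIBat aiBat;

                public Ball(ContentManager contentManager, Vector2 screenSize, Bat bat, AIBat aiBat)
                {
                    this.bat = bat;
                    this.aiBat = aiBat;
                    // ...the rest of the existing constructor body stays unchanged...
                }
            }
        }

        // In the game class (assumed wiring), construct the bats first,
        // then hand them to the ball:
        bat = new Bat(Content, screenSize, true);
        aiBat = new AIBat(Content, screenSize, false);
        ball = new Ball(Content, screenSize, bat, aiBat);

    With the references injected, BatHit can read bat.Position and aiBat.Position directly and no static class is needed; a static bat would also work, but it makes unit testing or a second human player harder later.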

    Read the article
