Search Results

Search found 19393 results on 776 pages for 'reference count'.


  • How do I pass the value of the previous form element into an "onchange" javascript function?

    - by Jen
    Hello, I want to make some UI improvements to a page I am developing. Specifically, I need to add another drop down menu to allow the user to filter results. This is my current code (HTML file):

```html
<select name="test_id" onchange="showGrid(this.name, this.value, 'gettestgrid')">
  <option selected>Select a test--></option>
  <option value=1>Test 1</option>
  <option value=2>Test 2</option>
  <option value=3>Test 3</option>
</select>
```

    This is pseudo code for what I want to happen:

```html
<select name="test_id">
  <option selected>Select a test--></option>
  <option value=1>Test 1</option>
  <option value=2>Test 2</option>
  <option value=3>Test 3</option>
</select>

<select name="statistics" onchange="showGrid(PREVIOUS.name, PREVIOUS.VALUE, THIS.value)">
  <option selected>Select a data display --></option>
  <option value='gettestgrid'>Show averages by student</option>
  <option value='gethomeroomgrid'>Show averages by homeroom</option>
  <option value='getschoolgrid'>Show averages by school</option>
</select>
```

    How do I access the previous field's name and value? Any help much appreciated, thx! Also, the JS function for reference:

```javascript
function showGrid(name, value, phpfile) {
    xmlhttp = GetXmlHttpObject();
    if (xmlhttp == null) {
        alert("Browser does not support HTTP Request");
        return;
    }
    var url = phpfile + ".php";
    url = url + "?" + name + "=" + value;
    url = url + "&sid=" + Math.random();
    xmlhttp.onreadystatechange = stateChanged;
    xmlhttp.open("GET", url, true);
    xmlhttp.send(null);
}
```

  • How to Integrate C++ compiler in Visual Studio 2008

    - by Kasun
    Hi, can someone help me with this issue? I'm currently working on the final-year project for my honours degree, and we are developing an application to evaluate programming assignments of students (at first-year student level). I just want to know how to integrate a C++ compiler using C# code in order to compile C++ code. In our case we load a student's C++ code into a text area, then compile it with a button click. If there are any compilation errors, they should be displayed in a text area nearby. (The interface is attached herewith.) Finally, if there are no compilation errors, the code should be executed and the results displayed in a console.

    We were able to do this for C# (with C# code loaded into the text area instead of C++ code) using the built-in compiler, but we are still not able to do it for C++ code. Can anyone suggest a method to do this? Is it possible to call an external compiler from VS C# code, and if so, how? I'd be very grateful to anyone who helps solve this matter. This is the code for the Build button, which handles the C# compiling:

```csharp
CodeDomProvider codeProvider = CodeDomProvider.CreateProvider("csharp");
string Output = "Out.exe";
Button ButtonObject = (Button)sender;
rtbresult.Text = "";

System.CodeDom.Compiler.CompilerParameters parameters = new CompilerParameters();
// Make sure we generate an EXE, not a DLL
parameters.GenerateExecutable = true;
parameters.OutputAssembly = Output;
CompilerResults results = codeProvider.CompileAssemblyFromSource(parameters, rtbcode.Text);

if (results.Errors.Count > 0)
{
    rtbresult.ForeColor = Color.Red;
    foreach (CompilerError CompErr in results.Errors)
    {
        rtbresult.Text = rtbresult.Text +
            "Line number " + CompErr.Line +
            ", Error Number: " + CompErr.ErrorNumber +
            ", '" + CompErr.ErrorText + ";" +
            Environment.NewLine + Environment.NewLine;
    }
}
else
{
    // Successful compile
    rtbresult.ForeColor = Color.Blue;
    rtbresult.Text = "Success!";
    // If we clicked Run then launch our EXE
    if (ButtonObject.Text == "Run")
        Process.Start(Output); // Run button
}
```
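    CodeDom providers only cover the .NET languages, so one common way to handle C++ here is to shell out to whatever C++ compiler is installed on the machine and capture its output. The sketch below is only an illustration of that idea - the compiler path, file names and flags are assumptions, and rtbcode/rtbresult refer to the controls from the question:

```csharp
// Rough sketch: compile the student's C++ by invoking an external compiler.
// The g++ path and arguments are placeholders -- point them at whatever
// toolchain (g++, cl.exe, ...) is actually installed.
System.IO.File.WriteAllText("student.cpp", rtbcode.Text);

var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = @"C:\MinGW\bin\g++.exe",          // assumed compiler location
    Arguments = "student.cpp -o student.exe",
    RedirectStandardOutput = true,
    RedirectStandardError = true,
    UseShellExecute = false,
    CreateNoWindow = true
};

using (var compiler = System.Diagnostics.Process.Start(psi))
{
    string errors = compiler.StandardError.ReadToEnd(); // g++ writes diagnostics to stderr
    compiler.WaitForExit();

    if (compiler.ExitCode == 0)
    {
        rtbresult.ForeColor = Color.Blue;
        rtbresult.Text = "Success!";
        System.Diagnostics.Process.Start("student.exe"); // run the student's program
    }
    else
    {
        rtbresult.ForeColor = Color.Red;
        rtbresult.Text = errors;
    }
}
```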

  • Saving JQuery Draggable Sitemap Values Correctly

    - by mdolon
    I am trying to implement Boagworld's Sitemap tutorial, however I am running into difficulty trying to correctly save the child/parent relationships. The HTML is as follows, however populated with other items as well: <input type="hidden" name="sitemap-order" id="sitemap-order" value="" /> <ul id=”sitemap”> <li id="1"> <dl> <dt><a href=”#”>expand/collapse</a> <a href=”#”>Page Title</a></dt> <dd>Text Page</dd> <dd>Published</dd> <dd><a href=”#”>delete</a></dd> </dl> <ul><!–child pages–></ul> </li> </ul> And here is the JQuery code: $('#sitemap li').prepend('<div class="dropzone"></div>'); $('#sitemap li').draggable({ handle: ' > dl', opacity: .8, addClasses: false, helper: 'clone', zIndex: 100 }); var order = ""; $('#sitemap dl, #sitemap .dropzone').droppable({ accept: '#sitemap li', tolerance: 'pointer', drop: function(e, ui) { var li = $(this).parent(); var child = !$(this).hasClass('dropzone'); //If this is our first child, we'll need a ul to drop into. if (child && li.children('ul').length == 0) { li.append('<ul/>'); } //ui.draggable is our reference to the item that's been dragged. if (child) { li.children('ul').append(ui.draggable); }else { li.before(ui.draggable); } //reset our background colours. li.find('dl,.dropzone').css({ backgroundColor: '', backgroundColor: '' }); li.find('.dropzone').css({ height: '8px', margin: '0' }); // THE PROBLEM: var parentid = $(this).parent().attr('id'); menuorder += ui.draggable.attr('id')+'=>'+parentid+','; $("#sitemap-order").val(order); }, over: function() { $(this).filter('dl').css({ backgroundColor: '#ccc' }); $(this).filter('.dropzone').css({ backgroundColor: '#aaa', height: '30px', margin: '5px 0'}); }, out: function() { $(this).filter('dl').css({ backgroundColor: '' }); $(this).filter('.dropzone').css({ backgroundColor: '', height: '8px', margin: '0' }); } }); When moving items into the top-level (without parents), the parentid value I get is of the first list item (the parent container), so I can never remove the parent value and have a top-level item. Is there a no-brainer answer that I'm just not seeing right now? Any help is appreciated.

  • BizTalk - generating schema from Oracle stored proc with table variable argument

    - by Ron Savage
    I'm trying to set up a simple example project in BizTalk that gets changes made to a table in a SQL Server db and updates a copy of that table in an Oracle db. On the SQL Server side, I have a stored proc named GetItemChanges() that returns a variable number of records. On the Oracle side, I have a stored proc named Update_Item_Region_Table() designed to take a table of records as a parameter so that it can process all the records returned from GetItemChanges() in one call. It is defined like this: create or replace type itemrec is OBJECT ( UPC VARCHAR2(15), REGION VARCHAR2(5), LONG_DESCRIPTION VARCHAR2(50), POS_DESCRIPTION VARCHAR2(30), POS_DEPT VARCHAR2(5), ITEM_SIZE VARCHAR2(10), ITEM_UOM VARCHAR2(5), BRAND VARCHAR2(10), ITEM_STATUS VARCHAR2(5), TIME_STAMP VARCHAR2(20), COSTEDBYWEIGHT INTEGER ); create or replace type tbl_of_rec is table of itemrec; create or replace PROCEDURE Update_Item_Region_table ( Item_Data tbl_of_rec ) IS errcode integer; errmsg varchar2(4000); BEGIN for recIndex in 1 .. Item_Data.COUNT loop update FL_ITEM_REGION_TEST set Region = Item_Data(recIndex).Region, Long_description = Item_Data(recIndex).Long_description, Pos_Description = Item_Data(recIndex).Pos_description, Pos_Dept = Item_Data(recIndex).Pos_dept, Item_Size = Item_Data(recIndex).Item_Size, Item_Uom = Item_Data(recIndex).Item_Uom, Brand = Item_Data(recIndex).Brand, Item_Status = Item_Data(recIndex).Item_Status, Timestamp = to_date(Item_Data(recIndex).Time_stamp, 'yyyy-mm-dd HH24:mi:ss'), CostedByWeight = Item_Data(recIndex).CostedByWeight where UPC = Item_Data(recIndex).UPC; log_message(Item_Data(recIndex).Region, '', 'Updated item ' || Item_Data(recIndex).UPC || '.'); end loop; EXCEPTION WHEN OTHERS THEN errcode := SQLCODE(); errmsg := SQLERRM(); log_message('CE', '', 'Error in Update_Item_Region_table(): Code [' || errcode || '], Msg [' || errmsg || '] ...'); END; In my BizTalk project I generate the schemas and binding information for both stored procedures. For the Oracle procedure, I specified a path for the GeneratedUserTypesAssemblyFilePath parameter to generate a DLL to contain the definition of the data types. In the Send Port on the server, I put the path to that Types DLL in the UserAssembliesLoadPath parameter. I created a map to translate the GetItemChanges() schema to the Update_Item_Region_Table() schema. When I run it the data is extracted and transformed fine but causes an exception trying to pass the data to the Oracle proc: *The adapter failed to transmit message going to send port "WcfSendPort_OracleDBBinding_HOST_DATA_Procedure_Custom" with URL "oracledb://dvotst/". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.InvalidOperationException: Custom type mapping for 'HOST_DATA.TBL_OF_REC' is not specified or is invalid.* So it is apparently not getting the information about the custom data type TBL_OF_REC into the Types DLL. Any tips on how to make this work?

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module.

    Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

```python
import cStringIO
example_file = cStringIO.StringIO("""\
header
CAGTcag
TFgcACF
""")
for read in parse(example_file):
    print read
```

    which prints:

```
CAGTCAGTF
AGTCAGTFG
GTCAGTFGC
TCAGTFGCA
CAGTFGCAC
AGTFGCACF
```

    The function that I found had the absolute best performance from the methods I could think of is this:

```python
def parse(file):
    size = 8  # of course in my code this is a function argument
    file.readline()  # skip past the header
    buffer = ''
    for line in file:
        buffer += line.rstrip().upper()
        while len(buffer) >= size:
            yield buffer[:size]
            buffer = buffer[1:]
```

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!

  • PHP Parse Error unexpected '{'

    - by Laxmidi
    Hi, I'm getting a "Parse error: syntax error, unexpected '{' in line 2". And I don't see the problem. <?php class pointLocation {     var $pointOnVertex = true; // Check if the point sits exactly on one of the vertices     function pointLocation() {     }                   function pointInPolygon($point, $polygon, $pointOnVertex = true) {         $this->pointOnVertex = $pointOnVertex;                  // Transform string coordinates into arrays with x and y values         $point = $this->pointStringToCoordinates($point);         $vertices = array();          foreach ($polygon as $vertex) {             $vertices[] = $this->pointStringToCoordinates($vertex);          }                  // Check if the point sits exactly on a vertex         if ($this->pointOnVertex == true and $this->pointOnVertex($point, $vertices) == true) {             return "vertex";         }                  // Check if the point is inside the polygon or on the boundary         $intersections = 0;          $vertices_count = count($vertices);              for ($i=1; $i < $vertices_count; $i++) {             $vertex1 = $vertices[$i-1];              $vertex2 = $vertices[$i];             if ($vertex1['y'] == $vertex2['y'] and $vertex1['y'] == $point['y'] and $point['x'] > min($vertex1['x'], $vertex2['x']) and $point['x'] < max($vertex1['x'], $vertex2['x'])) { // Check if point is on an horizontal polygon boundary                 return "boundary";             }             if ($point['y'] > min($vertex1['y'], $vertex2['y']) and $point['y'] <= max($vertex1['y'], $vertex2['y']) and $point['x'] <= max($vertex1['x'], $vertex2['x']) and $vertex1['y'] != $vertex2['y']) {                  $xinters = ($point['y'] - $vertex1['y']) * ($vertex2['x'] - $vertex1['x']) / ($vertex2['y'] - $vertex1['y']) + $vertex1['x'];                  if ($xinters == $point['x']) { // Check if point is on the polygon boundary (other than horizontal)                     return "boundary";                 }                 if ($vertex1['x'] == $vertex2['x'] || $point['x'] <= $xinters) {                     $intersections++;                  }             }          }          // If the number of edges we passed through is even, then it's in the polygon.          if ($intersections % 2 != 0) {             return "inside";         } else {             return "outside";         }     }               function pointOnVertex($point, $vertices) {         foreach($vertices as $vertex) {             if ($point == $vertex) {                 return true;             }         }          }                   function pointStringToCoordinates($pointString) {         $coordinates = explode(" ", $pointString);         return array("x" => $coordinates[0], "y" => $coordinates[1]);     }           } $pointLocation = new pointLocation(); $points = array("30 19", "0 0", "10 0", "30 20", "11 0", "0 11", "0 10", "30 22", "20 20"); $polygon = array("10 0", "20 0", "30 10", "30 20", "20 30", "10 30", "0 20", "0 10", "10 0"); foreach($points as $key => $point) { echo "$key ($point) is " . $pointLocation->pointInPolygon($point, $polygon) . "<br>"; } ?> Does anyone see the problem? Thanks, -Laxmidi

  • Does my TPL partitioner cause a deadlock?

    - by Scott Chamberlain
    I am starting to write my first parallel applications. This partitioner will enumerate over a IDataReader pulling chunkSize records at a time from the data-source. protected class DataSourcePartitioner<object[]> : System.Collections.Concurrent.Partitioner<object[]> { private readonly System.Data.IDataReader _Input; private readonly int _ChunkSize; public DataSourcePartitioner(System.Data.IDataReader input, int chunkSize = 10000) : base() { if (chunkSize < 1) throw new ArgumentOutOfRangeException("chunkSize"); _Input = input; _ChunkSize = chunkSize; } public override bool SupportsDynamicPartitions { get { return true; } } public override IList<IEnumerator<object[]>> GetPartitions(int partitionCount) { var dynamicPartitions = GetDynamicPartitions(); var partitions = new IEnumerator<object[]>[partitionCount]; for (int i = 0; i < partitionCount; i++) { partitions[i] = dynamicPartitions.GetEnumerator(); } return partitions; } public override IEnumerable<object[]> GetDynamicPartitions() { return new ListDynamicPartitions(_Input, _ChunkSize); } private class ListDynamicPartitions : IEnumerable<object[]> { private System.Data.IDataReader _Input; int _ChunkSize; private object _ChunkLock = new object(); public ListDynamicPartitions(System.Data.IDataReader input, int chunkSize) { _Input = input; _ChunkSize = chunkSize; } public IEnumerator<object[]> GetEnumerator() { while (true) { List<object[]> chunk = new List<object[]>(_ChunkSize); lock(_Input) { for (int i = 0; i < _ChunkSize; ++i) { if (!_Input.Read()) break; var values = new object[_Input.FieldCount]; _Input.GetValues(values); chunk.Add(values); } if (chunk.Count == 0) yield break; } var chunkEnumerator = chunk.GetEnumerator(); lock(_ChunkLock) //Will this cause a deadlock? { while (chunkEnumerator.MoveNext()) { yield return chunkEnumerator.Current; } } } } IEnumerator IEnumerable.GetEnumerator() { return ((IEnumerable<object[]>)this).GetEnumerator(); } } } I wanted IEnumerable object it passed back to be thread safe (the .Net example was so I am assuming PLINQ and TPL could need it) will the lock on _ChunkLock near the bottom help provide thread safety or will it cause a deadlock? From the documentation I could not tell if the lock would be released on the yeld return. Also if there is built in functionality to .net that will do what I am trying to do I would much rather use that. And if you find any other problems with the code I would appreciate it.

  • Sync services not actually syncing

    - by Paul Mrozowski
    I'm attempting to sync a SQL Server CE 3.5 database with a SQL Server 2008 database using MS Sync Services. I am using VS 2008. I created a Local Database Cache, connected it with SQL Server 2008 and picked the tables I wanted to sync. I selected SQL Server Tracking. It modified the database for change tracking and created a local copy (SDF) of the data.

    I need two-way syncing, so I created a partial class for the sync agent and added code into OnInitialized() to set the SyncDirection for the tables to Bidirectional. I've walked through with the debugger and this code runs. Then I created another partial class for the cache server sync provider and added an event handler into OnInitialized() to hook into the ApplyChangeFailed event. This code also works OK - my code runs when there is a conflict. Finally, I manually made some changes to the server data to test syncing. I use this code to fire off a sync:

```csharp
var agent = new FSEMobileCacheSyncAgent();
var syncStats = agent.Synchronize();
```

    syncStats seems to show the count of the number of changes I made on the server and shows that they were applied. However, when I open the local SDF file none of the changes are there.

    I basically followed the instructions I found here: http://msdn.microsoft.com/en-us/library/cc761546%28SQL.105%29.aspx and here: http://keithelder.net/blog/archive/2007/09/23/Sync-Services-for-SQL-Server-Compact-Edition-3.5-in-Visual.aspx

    It seems like this should "just work" at this point, but the changes made on the server aren't in the local SDF file. I guess I'm missing something but I'm just not seeing it right now. I thought this might be because I appeared to be using version 1 of Sync Services, so I removed the references to the Microsoft.Synchronization.* assemblies, installed Sync Framework 2.0 and added the new version of the assemblies to the project. That hasn't made any difference. Ideas?

    Edit: I wanted to enable tracing to see if I could track this down, but the only way to do that is through a WinForms app since it requires entries in the app.config file (my original project was a class library). I created a WinForms project and recreated everything, and suddenly everything is working. So apparently this requires a WinForms project for some reason? This isn't really how I planned on using this - I had hoped to kick off syncing through another non-.NET application and provide the UI there so the experience was a bit more seamless to the end user. If I can't do that, that's OK, but I'd really like to know if/how to make this work as a class library project instead.

  • Programmatically Binding to a Property

    - by M312V
    I know it's a generic title, but my question is specific. I think it will boil down to a question of practice. So, I have the following code:

```csharp
public class Component : UIElement
{
    public Component()
    {
        this.InputBindings.Add(
            new MouseBinding(SomeCommandProperty, new MouseGesture(MouseAction.LeftClick)));
    }
}
```

    I could easily aggregate the ViewModel that owns SomeCommandProperty into the Component class, but I'm currently waiving that option, assuming there is another way. Component is a child of ComponentCollection, which is a child of a Grid whose DataContext is the ViewModel. ComponentCollection, as the name suggests, contains a collection of Components.

```xml
<Grid Name="myGrid">
    <someNamespace:ComponentCollection x:Name="componentCollection"/>
</Grid>
```

    It's the same scenario as the XAML below, but with TextBlock. I guess I'm trying to replicate what's being done in the XAML below programmatically. Again, Component's topmost ancestor's DataContext is set to the ViewModel.

```xml
<Grid Name="myGrid">
    <TextBlock Text="SomeText">
        <TextBlock.InputBindings>
            <MouseBinding Command="{Binding SomeCommandProperty}" MouseAction="LeftClick" />
        </TextBlock.InputBindings>
    </TextBlock>
</Grid>
```

    Update 1: Sorry, I'm unable to comment because I lack the reputation points. Basically, I have a custom control which inherits from a Panel whose children are a collection of Component. It's not a hack; like I've mentioned, I could directly have access to SomeCommandProperty if I aggregate the ViewModel into Component. Doing so, however, feels icky - that is, having direct access to a ViewModel from a Model.

    I guess the question I'm asking is: given that Component's parent UIElement's DataContext is set to the ViewModel, is it possible to access SomeCommandProperty without Component owning a reference to the ViewModel that owns SomeCommandProperty? Programmatically, that is. Using ItemsControl doesn't change the fact that I still need to bind SomeCommandProperty to each item.
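    For what it's worth, one way to express that XAML binding in code is to set a binding on the MouseBinding's CommandProperty so it resolves against the inherited DataContext. This is only a sketch and rests on assumptions: WPF 4 (where InputBinding exposes dependency properties) and Component ultimately getting a DataContext, e.g. by deriving from FrameworkElement:

```csharp
public Component()
{
    var mouseBinding = new MouseBinding
    {
        Gesture = new MouseGesture(MouseAction.LeftClick)
    };

    // Bind the command by property name instead of referencing the ViewModel directly;
    // "SomeCommandProperty" is resolved against the inherited DataContext at runtime.
    BindingOperations.SetBinding(
        mouseBinding,
        InputBinding.CommandProperty,
        new Binding("SomeCommandProperty"));

    this.InputBindings.Add(mouseBinding);
}
```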

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a query for a ranking that is taking forever (the query itself works, but I know it's awful, and I've just tried it with a good number of records and it gives a timeout). I'll briefly explain the model. I have 3 tables: player, team and player_team. I have players, that can belong to a team. Obvious as it sounds, players are stored in the player table and teams in team. In my app, each player can switch teams at any time, and a log has to be maintained. However, a player is considered to belong to only one team at a given time. The current team of a player is the last one he's joined. The structure of player and team is not relevant, I think. I have an id column PK in each. In player_team I have:

    - id (PK)
    - player_id (FK -> player.id)
    - team_id (FK -> team.id)

    Now, each team is assigned a point for each player that has joined. So, now, I want to get a ranking of the first N teams with the biggest number of players. My first idea was to get first the current players from player_team (that is, one record tops for each player; this record must be the player's current team). I failed to find a simple way to do it (tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:

```sql
SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
FROM player_team pt
JOIN player p ON (p.id = pt.player_id)
JOIN team t ON (t.id = pt.team_id)
WHERE pt.id IN (
    SELECT max(J.id)
    FROM player_team J
    GROUP BY J.player_id
)
GROUP BY pt.team_id
ORDER BY total DESC
LIMIT 50
```

    As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way to go. Anyone has any ideas for optimizing this? I'm using mysql, by the way. Thanks in advance.

    Adding the explain:

    | id | select_type        | table | type   | possible_keys                                | key              | key_len | ref          | rows   | Extra                           |
    |----|--------------------|-------|--------|----------------------------------------------|------------------|---------|--------------|--------|---------------------------------|
    | 1  | PRIMARY            | t     | ALL    | PRIMARY                                      | NULL             | NULL    | NULL         | 5000   | Using temporary; Using filesort |
    | 1  | PRIMARY            | pt    | ref    | FKplayer_pt77082,FKplayer_pt265938,new_index | FKplayer_pt77082 | 4       | t.id         | 30     | Using where                     |
    | 1  | PRIMARY            | p     | eq_ref | PRIMARY                                      | PRIMARY          | 4       | pt.player_id | 1      |                                 |
    | 2  | DEPENDENT SUBQUERY | J     | index  | NULL                                         | new_index        | 8       | NULL         | 150000 | Using index                     |

  • One Controller is Sometimes Bound Twice with Ninject

    - by Dusda
    I have the following NinjectModule, where we bind our repositories and business objects: /// <summary> /// Used by Ninject to bind interface contracts to concrete types. /// </summary> public class ServiceModule : NinjectModule { /// <summary> /// Loads this instance. /// </summary> public override void Load() { //bindings here. //Bind<IMyInterface>().To<MyImplementation>(); Bind<IUserRepository>().To<SqlUserRepository>(); Bind<IHomeRepository>().To<SqlHomeRepository>(); Bind<IPhotoRepository>().To<SqlPhotoRepository>(); //and so on //business objects Bind<IUser>().To<Data.User>(); Bind<IHome>().To<Data.Home>(); Bind<IPhoto>().To<Data.Photo>(); //and so on } } And here are the relevant overrides from our Global.asax, where we inherit from NinjectHttpApplication in order to integrate it with Asp.Net Mvc (The module lies in a separate dll called Thing.Web.Configuration): protected override void OnApplicationStarted() { base.OnApplicationStarted(); //routes and areas AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); //Initializes a singleton that must reference this HttpApplication class, //in order to provide the Ninject Kernel to the rest of Thing.Web. This //is necessary because there are a few instances (currently Membership) //that require manual dependency injection. NinjectKernel.Instance = new NinjectKernel(this); //view model factory. NinjectKernel.Instance.Kernel.Bind<IModelFactory>().To<MasterModelFactory>(); } protected override NinjectControllerFactory CreateControllerFactory() { return base.CreateControllerFactory(); } protected override Ninject.IKernel CreateKernel() { var kernel = new StandardKernel(); kernel.Load("Thing.Web.Configuration.dll"); return kernel; } Now, everything works great, with one exception: For some reason, sometimes Ninject will bind the PhotoController twice. This leads to an ActivationException, because Ninject can't discern which PhotoController I want. This causes all requests for thumbnails and other user images on the site to fail. Here is the PhotoController in it's entirety: public class PhotoController : Controller { public PhotoController() { } public ActionResult Index(string id) { var dir = Server.MapPath("~/" + ConfigurationManager.AppSettings["UserPhotos"]); var path = Path.Combine(dir, id); return base.File(path, "image/jpeg"); } } Every controller works in exactly the same way, but for some reason the PhotoController gets double-bound. Even then, it only happens occasionally (either when re-building the solution, or on staging/production when the app pool kicks in). Once this happens, it continues to happen until I redeploy without changing anything. So...what's up with that?

  • php download file slows

    - by hobbywebsite
    OK, first off, thanks for your time - I wish I could give more than one point for this question.

    Problem: I have some music files on my site (.mp3) and I am using a PHP file to increment a database counter for the number of downloads and to point to the file to download. For some reason this method starts at 350kb/s then slowly drops to 5kb/s, at which point the file says it will take 11hrs to complete. BUT if I go directly to the .mp3 file my browser brings up a player, and then I can right click and "save as", which works fine - download complete in 3 mins. (Yes, both during the same time, for those that are thinking it's my connection or ISP - and it's not my server either.) So the only thing that I've been playing around with recently is the php.ini and the .htaccess files. So without further ado, the PHP file, php.ini, and the .htaccess:

    download.php

```php
<?php
include("config.php");
include("opendb.php");

$filename = 'song_name';
$filedl = $filename . '.mp3';

$query = "UPDATE songs SET song_download=song_download+1 WHER song_linkname='$filename'";
mysql_query($query);

header('Content-Disposition: attachment; filename='.basename($filedl));
header('Content-type: audio/mp3');
header('Content-Length: ' . filesize($filedl));
readfile('/music/' . $filename . '/' . $filedl);

include("closedb.php");
?>
```

    php.ini

```ini
register_globals = off
allow_url_fopen = off
expose_php = Off
max_input_time = 60
variables_order = "EGPCS"
extension_dir = ./
upload_tmp_dir = /tmp
precision = 12
SMTP = relay-hosting.secureserver.net
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=,fieldset="
; Defines the default timezone used by the date functions
date.timezone = "America/Los_Angeles"
```

    .htaccess

```apache
Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} !^(www.MindCollar.com)?$ [NC]
RewriteRule (.*) http://www.MindCollar.com/$1 [R=301,L]

<IfModule mod_rewrite.c>
RewriteEngine On
ErrorDocument 404 /errors/404.php
ErrorDocument 403 /errors/403.php
ErrorDocument 500 /errors/500.php
</IfModule>

Options -Indexes
Options +FollowSymlinks

<Files .htaccess>
deny from all
</Files>
```

    Thanks for your time.

  • How do I query delegation properties of an active directory user account?

    - by Mark J Miller
    I am writing a utility to audit the configuration of a WCF service. In order to properly pass credentials from the client, thru the WCF service back to the SQL back end the domain account used to run the service must be configured in Active Directory with the setting "Trust this user for delegation" (Properties - "Delegation" tab). Using C#, how do I access the settings on this tab in Active Directory. I've spent the last 5 hours trying to track this down on the web and can't seem to find it. Here's what I've done so far: using (Domain domain = Domain.GetCurrentDomain()) { Console.WriteLine(domain.Name); // get domain "dev" from MSSQLSERVER service account DirectoryEntry ouDn = new DirectoryEntry("LDAP://CN=Users,dc=dev,dc=mydomain,dc=lcl"); DirectorySearcher search = new DirectorySearcher(ouDn); // get sAMAccountName "dev.services" from MSSQLSERVER service account search.Filter = "(sAMAccountName=dev.services)"; search.PropertiesToLoad.Add("displayName"); search.PropertiesToLoad.Add("userAccountControl"); SearchResult result = search.FindOne(); if (result != null) { Console.WriteLine(result.Properties["displayName"][0]); DirectoryEntry entry = result.GetDirectoryEntry(); int userAccountControlFlags = (int)entry.Properties["userAccountControl"].Value; if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_FOR_DELEGATION) Console.WriteLine("TRUSTED_FOR_DELEGATION"); else if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION) Console.WriteLine("TRUSTED_TO_AUTH_FOR_DELEGATION"); else if ((userAccountControlFlags & (int)UserAccountControl.NOT_DELEGATED) == (int)UserAccountControl.NOT_DELEGATED) Console.WriteLine("NOT_DELEGATED"); foreach (PropertyValueCollection pvc in entry.Properties) { Console.WriteLine(pvc.PropertyName); for (int i = 0; i < pvc.Count; i++) { Console.WriteLine("\t{0}", pvc[i]); } } } } The "userAccountControl" does not seem to be the correct property. I think it is tied to the "Account Options" section on the "Account" tab, which is not what we're looking for but this is the closest I've gotten so far. The justification for all this is: We do not have permission to setup the service in QA or in Production, so along with our written instructions (which are notoriously only followed in partial) I am creating a tool that will audit the setup (WCF and SQL) to determine if the setup is correct. This will allow the person deploying the service to run this utility and verify everything is setup correctly - saving us hours of headaches and reducing downtime during deployment.
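    As a point of comparison, here is a hedged sketch that checks both delegation styles on the account found above. The userAccountControl bit covers "trust this user for delegation to any service", while Kerberos-only constrained delegation typically does not set that bit and instead populates the msDS-AllowedToDelegateTo attribute; the constant and attribute name below come from the AD documentation, so verify them against your schema (the snippet also assumes System.Linq for Cast):

```csharp
// entry is the DirectoryEntry obtained from result.GetDirectoryEntry() above.
const int ADS_UF_TRUSTED_FOR_DELEGATION = 0x80000;

int uac = (int)entry.Properties["userAccountControl"].Value;
bool unconstrained = (uac & ADS_UF_TRUSTED_FOR_DELEGATION) != 0;

// Constrained (Kerberos-only) delegation lists target SPNs here instead of
// setting the userAccountControl flag.
var allowedTo = entry.Properties["msDS-AllowedToDelegateTo"];
bool constrained = allowedTo != null && allowedTo.Count > 0;

if (unconstrained)
    Console.WriteLine("Trusted for delegation to any service");
else if (constrained)
    Console.WriteLine("Constrained delegation to: " +
        string.Join(", ", allowedTo.Cast<string>().ToArray()));
else
    Console.WriteLine("Not trusted for delegation");
```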

  • NHibernate and objects with value-semantics

    - by Groo
    Problem: If I pass a class with value semantics (Equals method overridden) to NHibernate, NHibernate tries to save it to db even though it just saved an entity equal by value (but not by reference) to the database. What am I doing wrong? Here is a simplified example model for my problem: Let's say I have a Person entity and a City entity. One thread (producer) is creating new Person objects which belong to a specific existing City, and another thread (consumer) is saving them to a repository (using NHibernate as DAL). Since there is lot of objects being flushed at a time, I am using Guid.Comb id's to ensure that each insert is made using a single SQL command. City is an object with value-type semantics (equal by name only -- for this example purposes only): public class City : IEquatable<City> { public virtual Guid Id { get; private set; } public virtual string Name { get; set; } public virtual bool Equals(City other) { if (other == null) return false; return this.Name == other.Name; } public override bool Equals(object obj) { return Equals(obj as City); } public override int GetHashCode() { return this.Name.GetHashCode(); } } Fluent NH mapping is something like: public class PersonMap : ClassMap<Person> { public PersonMap() { Id(x => x.Id) .GeneratedBy.GuidComb(); References(x => x.City) .Cascade.SaveUpdate(); } } public class CityMap : ClassMap<City> { public CityMap() { Id(x => x.Id) .GeneratedBy.GuidComb(); Map(x => x.Name); } } Right now (with my current NHibernate mapping config), my consumer thread maintains a dictionary of cities and replaces their references in incoming person objects (otherwise NHibernate will see a new, non-cached City object and try to save it as well), and I need to do it for every produced Person object. Since I have implemented City class to behave like a value type, I hoped that NHibernate would compare Cities by value and not try to save them each time -- i.e. I would only need to do a lookup once per session and not care about them anymore. Is this possible, and if yes, what am I doing wrong here?
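    One hedged alternative worth mentioning: if City really has value semantics, it can be mapped as an NHibernate component instead of an entity, so its data is stored inline with Person and compared by value rather than by identity, and there is nothing to cascade-save. This is only a sketch with a made-up column name, and it changes the schema, so it may not fit the existing City table:

```csharp
public class PersonMap : ClassMap<Person>
{
    public PersonMap()
    {
        Id(x => x.Id).GeneratedBy.GuidComb();

        // Map City as a value-typed component rather than a many-to-one entity.
        Component(x => x.City, c =>
        {
            c.Map(x => x.Name).Column("CityName");   // assumed column name
        });
    }
}
```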

  • Marshalling non-Blittable Structs from C# to C++

    - by Greggo
    I'm in the process of rewriting an overengineered and unmaintainable chunk of my company's library code that interfaces between C# and C++. I've started looking into P/Invoke, but it seems like there's not much in the way of accessible help. We're passing a struct that contains various parameters and settings down to unmanaged codes, so we're defining identical structs. We don't need to change any of those parameters on the C++ side, but we do need to access them after the P/Invoked function has returned. My questions are: What is the best way to pass strings? Some are short (device id's which can be set by us), and some are file paths (which may contain Asian characters) Should I pass an IntPtr to the C# struct or should I just let the Marshaller take care of it by putting the struct type in the function signature? Should I be worried about any non-pointer datatypes like bools or enums (in other, related structs)? We have the treat warnings as errors flag set in C++ so we can't use the Microsoft extension for enums to force a datatype. Is P/Invoke actually the way to go? There was some Microsoft documentation about Implicit P/Invoke that said it was more type-safe and performant. For reference, here is one of the pairs of structs I've written so far: C++ /** Struct used for marshalling Scan parameters from managed to unmanaged code. */ struct ScanParameters { LPSTR deviceID; LPSTR spdClock; LPSTR spdStartTrigger; double spinRpm; double startRadius; double endRadius; double trackSpacing; UINT64 numTracks; UINT32 nominalSampleCount; double gainLimit; double sampleRate; double scanHeight; LPWSTR qmoPath; //includes filename LPWSTR qzpPath; //includes filename }; C# /// <summary> /// Struct used for marshalling scan parameters between managed and unmanaged code. /// </summary> [StructLayout(LayoutKind.Sequential)] public struct ScanParameters { [MarshalAs(UnmanagedType.LPStr)] public string deviceID; [MarshalAs(UnmanagedType.LPStr)] public string spdClock; [MarshalAs(UnmanagedType.LPStr)] public string spdStartTrigger; public Double spinRpm; public Double startRadius; public Double endRadius; public Double trackSpacing; public UInt64 numTracks; public UInt32 nominalSampleCount; public Double gainLimit; public Double sampleRate; public Double scanHeight; [MarshalAs(UnmanagedType.LPWStr)] public string qmoPath; [MarshalAs(UnmanagedType.LPWStr)] public string qzpPath; }
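    For reference, a P/Invoke declaration for such a struct might look like the sketch below. The DLL name, export name, and calling convention are placeholders, since they depend on the C++ side; the point is that the marshaller can handle the struct passed by ref directly, without resorting to IntPtr, and per-field MarshalAs attributes (as in the struct above) cover the mixed LPSTR/LPWSTR strings:

```csharp
// Hypothetical native export -- substitute the real DLL, entry point and
// calling convention from the C++ header.
[DllImport("ScannerNative.dll", CallingConvention = CallingConvention.Cdecl)]
static extern int StartScan(ref ScanParameters parameters);

// Usage sketch:
var p = new ScanParameters
{
    deviceID = "DEV-01",
    qmoPath = @"C:\scans\run01.qmo",   // may contain non-ASCII characters
    spinRpm = 3600.0
};
int status = StartScan(ref p);
```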

  • Backbone.js (model instanceof Model) via Chrome Extension

    - by Leoncelot
    Hey guys, this is my first time ever posting on this site, and the problem I'm about to pose is difficult to articulate due to the set of variables required to arrive at it. Let me just quickly explain the framework I'm working with. I'm building a Chrome Extension using jQuery, jQuery-ui, and Backbone. The entire JS suite for the extension is written in CoffeeScript and I'm utilizing Rails and the asset pipeline to manage it all. This means that when I want to deploy my extension code I run rake assets:precompile and copy the resulting compressed JS to my extension's directory.

    The nice thing about this approach is that I can actually run the extension JS from inside my Rails app by including the library. This is basically the same as my extension's background.js file, which injects the JS as a content script.

    Anyway, the problem I've recently encountered was when I tried testing my extension on my buddy's site, whiskeynotes.com. What I was noticing is that my Backbone models were being mangled upon adding them to their respective collections. So something like this.collection.add(new SomeModel) created some nonsense version of my model. This code eventually runs into Backbone's prepareModel code:

```javascript
_prepareModel: function(model, options) {
  options || (options = {});
  if (!(model instanceof Model)) {
    var attrs = model;
    options.collection = this;
    model = new this.model(attrs, options);
    if (!model._validate(model.attributes, options)) model = false;
  } else if (!model.collection) {
    model.collection = this;
  }
  return model;
},
```

    Now, on most of the sites on which I've tested the extension, the result is normal. However, on my buddy's site the !(model instanceof Model) check evaluates to true even though the model is actually an instance of the correct class. The consequence is a super messed up version of the model where the model's attributes is a reference to the model's collection (strange, right?). Needless to say, all kinds of crazy things were happening afterward.

    Why this is occurring is beyond me. However, changing this line (!(model instanceof Model)) to (!(model instanceof Backbone.Model)) seems to fix the problem. I thought maybe it had something to do with the Flot library (jQuery graph library) creating its own version of 'Model', but looking through the source yielded no instances of it. I'm just curious as to why this would happen. And does it make sense to add this little change to the Backbone source?

    Update: I just realized that the "fix" doesn't actually work. I can also add that my Backbone models are namespaced in a wrapping object, so that declaration looks something like class SomeNamespace.SomeModel extends Backbone.Model.

  • Pass param to a silverlight application

    - by Lucas_Santos
    In my javascript I create my <OBJECT> tag var htmlEmbedSilverlight = "<div id='silverlightControlHost'> " + "<object data='data:application/x-silverlight-2,' type='application/x-silverlight-2' width='550px' height='250px'> " + "<param name='source' value='../../ClientBin/FotoEmprestimoChave.xap'/> " + "<param name='onError' value='onSilverlightError' /> " + "<param name='background' value='white' /> " + "<param name='minRuntimeVersion' value='4.0.60310.0' /> " + "<param name='autoUpgrade' value='true' /> " + "<param name='initparams' values='chave_id=" + data + "' /> " + "<a href='http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.60310.0' style='text-decoration:none'> " + "<img src='http://go.microsoft.com/fwlink/?LinkId=161376' alt='Get Microsoft Silverlight' style='border-style:none'/> " + "</a> " + "</object><iframe id='_sl_historyFrame' style='visibility:hidden;height:0px;width:0px;border:0px'></iframe></div>"; $("#tiraFotoSilverlight").html(htmlEmbedSilverlight); This is a reference to my Silverlight application where I call in my Web Application. The problem is my <param name='initparams' values='chave_id=" + data + "' /> " because in my App.xaml in Silverlight, I have the code below private void Application_Startup(object sender, StartupEventArgs e) { if (e.InitParams != null) { foreach (var item in e.InitParams) { this.Resources.Add(item.Key, item.Value); } } this.RootVisual = new MainPage(); } Where InitParams always has Count = 0 and I don't know why. Can someone help me ? I'm just trying to pass a value to my Silverlight application, without a PostBack. Rendered <object width="550px" height="250px" type="application/x-silverlight-2" data="data:application/x-silverlight-2,"> <param value="../../ClientBin/FotoEmprestimoChave.xap" name="source"> <param value="onSilverlightError" name="onError"> <param value="white" name="background"> <param value="4.0.60310.0" name="minRuntimeVersion"> <param value="true" name="autoUpgrade"> <param values="chave_id=1" name="initparams"> <a style="text-decoration:none" href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.60310.0"> </object>

  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the indescriptive title, however it's the best I could think of for the moment.

    Basically, I've written a singleton class that loads files into a database. These files are typically large, and take hours to process. What I am looking for is to make a method where I can have this class running, and be able to call methods from within it, even if its calling class is shut down.

    The singleton class is simple. It starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:

```csharp
public sealed class BulkFileLoader
{
    static BulkFileLoader instance = null;
    int currentCount = 0;

    BulkFileLoader() { }

    public static BulkFileLoader Instance
    {
        // Instantiate the instance class if necessary, and return it
    }

    public void Go()
    {
        // kick off 'ProcessFile' thread
    }

    public void GetCurrentCount()
    {
        return currentCount;
    }

    private void ProcessFile()
    {
        while (more rows in the import file)
        {
            // insert the row into the database
            currentCount++;
        }
    }
}
```

    The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get realtime updates on the number of rows it's done so far using the GetCurrentCount() method. This works fine, except the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed, and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it around as a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now the problem I have is that I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance that is currently in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader, and at any time be able to find out how many rows it's processed - even if I close the application I used to start it. Can anyone see a solution to my problem?
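    If the loader does end up wrapped in its own process, one hedged way to keep the progress reporting - names below are purely illustrative - is to have the console host print the current count periodically and let the caller read it from redirected standard output, rather than trying to reach the singleton across process boundaries:

```csharp
// Caller side -- launch the loader process and read its progress lines.
var psi = new System.Diagnostics.ProcessStartInfo("BulkFileLoader.exe", "import.dat")
{
    RedirectStandardOutput = true,
    UseShellExecute = false,
    CreateNoWindow = true
};

using (var loader = System.Diagnostics.Process.Start(psi))
{
    string line;
    while ((line = loader.StandardOutput.ReadLine()) != null)
    {
        // The console host is assumed to write its currentCount every few seconds.
        Console.WriteLine("Rows processed so far: " + line);
    }
    loader.WaitForExit();
}
```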

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented: MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

```cpp
CRecordset r(&m_db);
r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
r.MoveFirst();
while (!r.IsEOF())
{
    // Retrieve
    CString strData;
    crs.GetFieldValue(L"a.something", strData);
    crs.MoveNext();
}
```

    Now, with the MySQL driver, everything runs as it should. The query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

```cpp
::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);
```

    But this didn't help either. There are still long-running execs of sp_cursorfetch() et al in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This however only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the native client to "cache" all results locally / use local cursors in a similar fashion as the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc. or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?

  • Inserting a string array as a row into an Excel document using the Open XML SDK 2.0

    - by Sam
    The code runs, but corrupts my excel document. Any help would be mucho appreciated! I used this as a reference. public void AddRow(string fileName, string[] values) { using (SpreadsheetDocument doc = SpreadsheetDocument.Open(fileName, true)) { SharedStringTablePart sharedStringPart = GetSharedStringPart(doc); WorksheetPart worksheetPart = doc.WorkbookPart.WorksheetParts.First(); uint rowIdx = AppendRow(worksheetPart); for (int i = 0; i < values.Length; ++i) { int stringIdx = InsertSharedString(values[i], sharedStringPart); Cell cell = InsertCell(i, rowIdx, worksheetPart); cell.CellValue = new CellValue(stringIdx.ToString()); cell.DataType = new EnumValue<CellValues>( CellValues.SharedString); worksheetPart.Worksheet.Save(); } } } private SharedStringTablePart GetSharedStringPart( SpreadsheetDocument doc) { if (doc.WorkbookPart. GetPartsCountOfType<SharedStringTablePart>() > 0) return doc.WorkbookPart. GetPartsOfType<SharedStringTablePart>().First(); else return doc.WorkbookPart. AddNewPart<SharedStringTablePart>(); } private uint AppendRow(WorksheetPart worksheetPart) { SheetData sheetData = worksheetPart.Worksheet. GetFirstChild<SheetData>(); uint rowIndex = (uint)sheetData.Elements<Row>().Count(); Row row = new Row() { RowIndex = rowIndex }; sheetData.Append(row); return rowIndex; } private int InsertSharedString(string s, SharedStringTablePart sharedStringPart) { if (sharedStringPart.SharedStringTable == null) sharedStringPart.SharedStringTable = new SharedStringTable(); int i = 0; foreach (SharedStringItem item in sharedStringPart.SharedStringTable. Elements<SharedStringItem>()) { if (item.InnerText == s) return i; ++i; } sharedStringPart.SharedStringTable.AppendChild( new Text(s)); sharedStringPart.SharedStringTable.Save(); return i; } private Cell InsertCell(int i, uint rowIdx, WorksheetPart worksheetPart) { SheetData sheetData = worksheetPart.Worksheet. GetFirstChild<SheetData>(); string cellReference = AlphabetMap.Instance[i] + rowIdx; Cell cell = new Cell() { CellReference = cellReference }; Row row = sheetData.Elements<Row>().ElementAt((int)rowIdx); row.InsertAt(cell, i); worksheetPart.Worksheet.Save(); return cell; }
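    One detail that may be worth double-checking against the SDK documentation: in SpreadsheetML the shared string table holds SharedStringItem elements, each of which wraps a Text element, so appending a bare Text is a likely source of an invalid part (and RowIndex values are 1-based, which may also be worth verifying). A hedged sketch of the append, assuming the stock Open XML SDK 2.0 types:

```csharp
// Append the string wrapped in a SharedStringItem rather than a bare Text element.
sharedStringPart.SharedStringTable.AppendChild(
    new SharedStringItem(
        new DocumentFormat.OpenXml.Spreadsheet.Text(s)));
sharedStringPart.SharedStringTable.Save();
```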

  • How to implement a caching model without violating MVC pattern?

    - by RPM1984
    Hi guys, I have an ASP.NET MVC 3 (Razor) web application, with a particular page which is highly database intensive, and user experience is of the utmost priority. Thus, I am introducing caching on this particular page. I'm trying to figure out a way to implement this caching pattern whilst keeping my controller thin, like it currently is without caching:

```csharp
public PartialViewResult GetLocationStuff(SearchPreferences searchPreferences)
{
    var results = _locationService.FindStuffByCriteria(searchPreferences);
    return PartialView("SearchResults", results);
}
```

    As you can see, the controller is very thin, as it should be. It doesn't care about how/where it is getting its info from - that is the job of the service.

    A couple of notes on the flow of control:

    - Controllers get DI'ed a particular service, depending on its area. In this example, this controller gets a LocationService.
    - Services call through to an IQueryable<T> repository and materialize results into T or ICollection<T>.

    How I want to implement caching: I can't use Output Caching, for a few reasons. First of all, this action method is invoked from the client side (jQuery/AJAX), via [HttpPost], which according to HTTP standards should not be cached as a request. Secondly, I don't want to cache purely based on the HTTP request arguments - the cache logic is a lot more complicated than that - there is actually two-level caching going on.

    As I hint above, I need to use regular data caching, e.g. Cache["somekey"] = someObj;. I don't want to implement a generic caching mechanism where all calls via the service go through the cache first - I only want caching on this particular action method.

    First thoughts would tell me to create another service (which inherits LocationService) and provide the caching workflow there (check cache first, if not there call db, add to cache, return result). That has two problems:

    - The services are basic class libraries - no references to anything extra. I would need to add a reference to System.Web here.
    - I would have to access the HTTP context outside of the web application, which is considered bad practice, not only for testability, but in general - right?

    I also thought about using the Models folder in the web application (which I currently use only for ViewModels), but having a cache service in a Models folder just doesn't sound right.

    So - any ideas? Is there an MVC-specific thing (like action filters, for example) I can use here? General advice/tips would be greatly appreciated.
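    One pattern that keeps the controller thin is a caching decorator that lives in the web project (or a separate infrastructure assembly) and wraps the existing service, using System.Runtime.Caching so neither the service library nor the controller touches System.Web. The sketch below is based on assumptions about the interfaces named in the question - ILocationService, the result type, and the cache-key logic are placeholders:

```csharp
using System.Runtime.Caching;

public class CachedLocationService : ILocationService
{
    private readonly ILocationService _inner;
    private static readonly ObjectCache Cache = MemoryCache.Default;

    public CachedLocationService(ILocationService inner)
    {
        _inner = inner;
    }

    public ICollection<LocationStuff> FindStuffByCriteria(SearchPreferences prefs)
    {
        // Placeholder key -- real code would build it from the two-level cache rules.
        string key = "location-stuff:" + prefs.GetHashCode();

        var cached = Cache.Get(key) as ICollection<LocationStuff>;
        if (cached != null)
            return cached;

        var results = _inner.FindStuffByCriteria(prefs);
        Cache.Set(key, results, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
        });
        return results;
    }
}
```

    The DI container would then hand this decorator only to the controller for this area, so every other consumer keeps the plain, cache-free LocationService.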

  • ASP.Net / MySQL : Translating content into several languages

    - by philwilks
    I have an ASP.Net website which uses a MySQL database for the back end. The website is an English e-commerce system, and we are looking at the possibility of translating it into about five other languages (French, Spanish etc). We will be getting human translators to perform the translation - we've looked at automated services but these aren't good enough.

    The static text on the site (e.g. headings, buttons etc) can easily be served up in multiple languages via .Net's built-in localization features (resx files etc). The thing that I'm not so sure about is how best to store and retrieve the multi-language content in the database. For example, there is a products table that includes these fields...

    - productId (int)
    - categoryId (int)
    - title (varchar)
    - summary (varchar)
    - description (text)
    - features (text)

    The title, summary, description and features text would need to be available in all the different languages. Here are the two options that I've come up with...

    Create an additional field for each language

    For example we could have titleEn, titleFr, titleEs etc for all the languages, and repeat this for all text columns. We would then adapt our code to use the appropriate field depending on the language selected. This feels a bit hacky, and also would lead to some very large tables. Also, if we wanted to add additional languages in the future it would be time consuming to add even more columns.

    Use a lookup table

    We could create a new table with the following format...

    | textId | languageId | content |
    |--------|------------|---------|
    | 10     | EN         | Car     |
    | 10     | FR         | Voiture |
    | 10     | ES         | Coche   |
    | 11     | EN         | Bike    |
    | 11     | FR         | Vélo    |

    We'd then adapt our products table to reference the appropriate textId for the title, summary, description and features instead of having the text stored in the product table. This seems much more elegant, but I can't think of a simple way of getting this data out of the database and onto the page without using complex SQL statements. Of course adding new languages in the future would be very simple compared to the previous option.

    I'd be very grateful for any suggestions about the best way to achieve this! Is there any "best practice" guidance out there? Has anyone done this before?

  • Clicking the TableView leads you to the another View

    - by lakesh
    I am a newbie to iPhone development. I have already created an UITableView. I have wired everything up and included the delegate and datasource. However, instead of adding a detail view accessory by using UITableViewCellAccessoryDetailClosureButton, I would like to click the UITableViewCell and it should lead to another view with more details about the UITableViewCell. My view controller looks like this: ViewController.h #import <UIKit/UIKit.h> @interface ViewController : UIViewController<UITableViewDataSource,UITableViewDelegate>{ NSArray *tableItems; NSArray *images; } @property (nonatomic,retain) NSArray *tableItems; @property (nonatomic,retain) NSArray *images; @end ViewController.m #import "ViewController.h" #import <QuartzCore/QuartzCore.h> @interface ViewController () @end @implementation ViewController @synthesize tableItems,images; - (void)viewDidLoad { [super viewDidLoad]; tableItems = [[NSArray alloc] initWithObjects:@"Item1",@"Item2",@"Item3",nil]; images = [[NSArray alloc] initWithObjects:[UIImage imageNamed:@"clock.png"],[UIImage imageNamed:@"eye.png"],[UIImage imageNamed:@"target.png"],nil]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section{ return tableItems.count; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{ //Step 1:Check whether if we can reuse a cell UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"cell"]; //If there are no new cells to reuse,create a new one if(cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:(UITableViewCellStyleDefault) reuseIdentifier:@"cell"]; UIView *v = [[UIView alloc] init]; v.backgroundColor = [UIColor redColor]; cell.selectedBackgroundView = v; //changing the radius of the corners //cell.layer.cornerRadius = 10; } //Set the image in the row cell.imageView.image = [images objectAtIndex:indexPath.row]; //Step 3: Set the cell text content cell.textLabel.text = [tableItems objectAtIndex:indexPath.row]; //Step 4: Return the row return cell; } - (void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath{ cell.backgroundColor = [ UIColor greenColor]; } @end Need some guidance on this.. Thanks.. Please pardon me if this is a stupid question.

  • How do you programmatically set a Style on a View?

    - by Greg
    I would like to do something like this, but in code:

```xml
<Button
    android:id="@+id/button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    style="@style/SubmitButtonType" />
```

    The XML approach works fine provided that SubmitButtonType is defined. Now what I assume happens is that the aapt parser runs through this XML and generates an AttributeSet. That AttributeSet, when passed to context/theme#obtainStyledAttributes(), will have the style ref mask anything that is not written inline in this tag. Great, that's fine! Now how do we do this programmatically?

    Button, as well as other View types, has a constructor of the form <Widget>(Context context, AttributeSet set, int defStyle). So I thought this would work:

```java
Button button = new Button(context, null, R.style.SubmitButtonType);
```

    However, I am finding that defStyle is badly documented, as it really should be described as a resource id of an attribute (from R.attr) that will be passed to obtainStyledAttributes() as the attribute resource, and not the style resource. After looking at the code, all the view implementations seem to pass 0 as the styleRef. I don't see the harm in having it passed as both the attr and the style resource (more flexible and negligible overhead). However, I might be approaching this all wrong.

    How do you do this in code then, other than by setting each individual element of the style on the specific widget you want to style (only possible by looking at the code to see what param maps to which method or set of methods)? The only way I have found to do this is:

```xml
<declare-styleable>
    <attr name="totallyAdhoc_attribute_just_for_this_case" format="reference" />
</declare-styleable>

<style name="MyAlreadyExistantTheme">
    ...
    ...
    <item name="totallyAdhoc_attribute_just_for_this_case">@style/SubmitButtonType</item>
</style>
```

    And instead of passing R.style.SubmitButtonType as defStyle, I pass the new R.attr.totallyAdhoc_attribute_just_for_this_case:

```java
Button button = new Button(context, null, R.attr.totallyAdhoc_attribute_just_for_this_case);
```

    This works, but sounds way too complicated.

  • jQuery - Sorting an array?

    - by Probocop
    Hi, I'm using Ajax to get some XML, and then filling in some fields on a form with the results. There is a numerical field on the form and I would like to sort the results by this number (highest first). How would I go about doing this in jQuery? My JS function code is currently:

```javascript
function linkCounts() {
    ws_url = "http://archreport.epiphanydev2.co.uk/worker.php?query=linkcounts&domain=" + $('#hidden_the_domain').val();
    $.ajax({
        type: "GET",
        url: ws_url,
        dataType: "xml",
        success: function(xmlIn) {
            results = xmlIn.getElementsByTagName("URL");
            for (var i = 0; i < results.length; i++) {
                $("#tb_domain_linkcount_url_" + (i + 1)).val($(results[i].getElementsByTagName("Page")).text());
                $("#tb_domain_linkcount_num_" + (i + 1)).val($(results[i].getElementsByTagName("Links")).text());
            }
            $('#img_linkcount_worked').attr("src", "/images/worked.jpg");
        },
        error: function() {
            $('#img_linkcount_worked').attr("src", "/images/failed.jpg");
        }
    });
}
```

    The Links tag is the one I'm wanting to sort on. Thanks!

    For reference, the XML that's getting returned is like the following:

```xml
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<Response>
  <ResponseCode>1</ResponseCode>
  <ResponseStatus>OK</ResponseStatus>
  <ReportId>2</ReportId>
  <UrlChecked />
  <MaxLinks>75</MaxLinks>
  <PagesFound>121</PagesFound>
  <URLs>
    <URL>
      <Page>http://www.epiphanysolutions.co.uk/blog</Page>
      <Links>78</Links>
    </URL>
    <URL>
      <Page>http://www.epiphanysolutions.co.uk/blog/</Page>
      <Links>78</Links>
    </URL>
    <URL>
      <Page>http://www.epiphanysolutions.co.uk/blog/author/daniel-peden/</Page>
      <Links>78</Links>
    </URL>
    <URL>
      <Page>http://www.epiphanysolutions.co.uk/blog/author/daniel-peden/page/2/</Page>
      <Links>78</Links>
    </URL>
  </URLs>
</Response>
```
