Search Results

Search found 5245 results on 210 pages for 'confused or too stupid'.

  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable to a .NET FileStream? Several threads call a method like this:

        ulong offset = whatever; // different for each thread
        byte[] buffer = new byte[8192];
        object state = someState; // unique for each call, hence also for each thread
        IAsyncResult result;
        lock(theFile)
        {
            theFile.Seek((long)offset, SeekOrigin.Begin);
            result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state);
        }
        if(result.CompletedSynchronously)
        {
            // is it required for us to call AcceptResults ourselves in this case?
            // or did BeginRead already call it for us, on this thread or another?
        }

    where AcceptResults is:

        void AcceptResults(IAsyncResult result)
        {
            lock(theFile)
            {
                int bytesRead = theFile.EndRead(result);
                // if we guarantee that the offset of the original call was at least 8192 bytes from
                // the end of the file, and thus all 8192 bytes exist, can the FileStream read still
                // actually read fewer bytes than that?
                // either:
                if(bytesRead != 8192) { Panic("Page read borked"); }
                // or: issue a new call to BeginRead, moving the offsets into the FileStream and
                // the buffer, and decreasing the requested size of the read to whatever remains of the buffer
            }
        }

    I'm confused because the documentation seems unclear to me. For example, the FileStream class says: "Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe." But the documentation for BeginRead seems to contemplate having multiple read requests in flight: "Multiple simultaneous asynchronous requests render the request completion order uncertain." Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the Position of the stream between the call to Seek and the call to BeginRead, or does that lock need to be held all the way to EndRead, so that only one read or write is in flight at a time? I understand that the callback will occur on a different thread, and my handling of state and buffer would allow multiple in-flight reads. Further, does anyone know where in the documentation to find the answers to these questions, or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: the FileStream class, the Seek method, the BeginRead and EndRead methods, and the IAsyncResult interface.
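
    For illustration only (this is my own sketch, not the poster's code or an answer drawn from the documentation): one conservative reading of the thread-safety note is to hold the lock across the whole positioned read, which is easiest with the synchronous Read, so only one read is ever in flight on the shared stream. The file name and offsets below are made up, and Read legitimately returning fewer bytes than requested is handled with a loop.

        using System;
        using System.IO;
        using System.Threading;

        class PositionedReader
        {
            private readonly FileStream file;
            private readonly object gate = new object();

            public PositionedReader(string path)
            {
                // One shared stream; every positioned read is serialized through 'gate'.
                file = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
            }

            // Reads exactly 'count' bytes starting at 'offset', holding the lock for the
            // whole Seek + Read sequence so Position cannot move underneath us.
            public byte[] ReadAt(long offset, int count)
            {
                var buffer = new byte[count];
                lock (gate)
                {
                    file.Seek(offset, SeekOrigin.Begin);
                    int read = 0;
                    while (read < count)
                    {
                        int n = file.Read(buffer, read, count - read);
                        if (n == 0)
                            throw new EndOfStreamException("File ended before the requested range.");
                        read += n; // Read may return fewer bytes than asked for, even mid-file.
                    }
                }
                return buffer;
            }
        }

        class Demo
        {
            static void Main()
            {
                var reader = new PositionedReader("data.bin"); // hypothetical file
                var threads = new Thread[4];
                for (int i = 0; i < threads.Length; i++)
                {
                    long offset = i * 8192L; // a different page per thread
                    threads[i] = new Thread(() => Console.WriteLine(reader.ReadAt(offset, 8192).Length));
                    threads[i].Start();
                }
                foreach (var t in threads) t.Join();
            }
        }

    Whether the lock may instead be released between BeginRead and EndRead is exactly what the question is asking, so the sketch deliberately does not assume an answer to that.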

  • Android Marketplace Error: "The server could not process your apk. Try again."

    - by jdandrea
    I have an updated apk - tested successfully on various devices and simulator instances - with the following manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.myCompany.appName" android:versionCode="2" android:versionName="1.0.1"> <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="5" /> <uses-permission android:name="android.permission.INTERNET" /> <supports-screens android:largeScreens="true" android:normalScreens="true" android:smallScreens="true" /> <application android:icon="@drawable/icon" android:label="@string/icon_name" android:debuggable="false"> <activity android:name=".myActivity" android:configChanges="keyboardHidden|orientation"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> When I post to Android Marketplace as an upgrade to my existing 1.0 app, I get the aforementioned ambiguous message: "The server could not process your apk. Try again." I've searched elsewhere for this message in hopes of finding out what might be happening, to no avail. (A popular suggestion is to move the uses-sdk element to the top of the manifest, but as you can see it's already at the top.) Clues welcome/appreciated. Update: I just tried to upload the same file again. Now I get a new message: The new apk's versionCode (2) in AndroidManifest.xml must be higher than the old apk's versionCode (2). The server could not process your apk. Try again. Soooo Marketplace did get my upgraded apk after all? (The very first accepted apk's versionCode was 1, so this update was of course bumped to 2.) Confused … Bumping it up to 3 and trying again. Surprise surprise, I get the original "could not process" error all over again. Going in circles. Hmm ... :( Nuther Update: If I exit and re-enter the Marketplace page, now it shows that the app has been uploaded! Except there's no app icon. Curiouser and curiouser ... and this is all happening with a cache-cleared (standards-friendly) browser to boot. So - do I trust the upload? Or start over ... with versionCode="4"? All I want is to get a solid "Upload successful, here's the icon, ready to publish" type of response.

  • TinyMCE not working with Chrome when I dynamically setContent

    - by oo
    I have a site that i put: <body onload="ajaxLoad()" > I have a javascript function that then shove data from my db into the text editor by using the setContent method in javascript of the textarea. seems fine in firefox and IE but in chrome sometimes nothing shows up. no error, just blank editor in the body section: <textarea id="elm1" name="elm1" rows="40" cols="60" style="width: 100%"> </textarea> in the head section: function ajaxLoad() { var ed = tinyMCE.get('elm1'); ed.setProgressState(1); // Show progress window.setTimeout(function() { ed.setProgressState(0); // Hide progress ed.setContent('<p style="text-align: center;"><strong><br /><span style="font-size: small;">General Manager&#39;s Corner</span></strong></p><p style="text-align: center;">August&nbsp;2009</p><p>It&rsquo;s been 15<sup>th</sup> and so have a Steak Night (Saturday, 15<sup>th</sup>) and a shore Dinner planned (Saturday, 22<sup>nd</sup>) this month. urday, September 5<sup>th</sup>. e a can&rsquo;t missed evening, shas extended it one additional week. The last clinic will be the week of August 11<sup>th</sup>. </p><p>&nbsp;Alt (Tuesday through Thursday) </p><p>&nbsp;I wouClub.</p><p>&nbsp;</p><p>&nbsp;</p><p>&nbsp;</p><p>&nbsp;<strong></strong></p>'); }, 1); } i am not sure if its some of the formatting that chrome is reject but it seems like if tinymce can parse it in one browser it can do it in any browser so i am confused. any suggestions?

  • Android stream to Wowza

    - by Curtis Kiu
    I feel very confused about streaming from Android to Wowza. I am building a cross-platform video conference over RTMP, but Android doesn't speak RTMP, so I need another way to handle the upstream. I found a new open-source app called spydroid-ipcamera. It uses RTP, sending UDP packets to the computer, and the stream opens in VLC using the following SDP:

        v=0
        s=Unnamed
        m=video 5006 RTP/AVP 96
        a=rtpmap:96 H264/90000
        a=fmtp:96 packetization-mode=1;profile-level-id=420016;sprop-parameter-sets=Z0IAFukBQHsg,aM4BDyA=;

    That alone didn't work for me. Then I followed the Wowza tutorial, streamed to Wowza, and played it back in VLC — that works! I wrote it up in http://code.google.com/p/spydroid-ipcamera/issues/detail?id=2. However, when I add audio to the packets it stops working. I changed the code in http://code.google.com/p/spydroid-ipcamera/source/browse/trunk/src/net/mkp/spydroid/CameraStreamer.java to:

        mr.setAudioSource(MediaRecorder.AudioSource.MIC);
        mr.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mr.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        mr.setVideoFrameRate(20);
        mr.setVideoSize(640, 480);
        mr.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        mr.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        mr.setPreviewDisplay(holder.getSurface());

    I then thought the problem must be in the SDP, but I don't know how to deal with SDP — I'm streaming H.264/AAC in MP4 and, secondly, I don't understand SDP. So how can I build the upstream part of a video conference with this app?

        Android ----(UDP port 5006)----> PC (SDP file), then Wowza reads the SDP file ------> VLC

    I think this way the system cannot handle more than one client, since the SDP only holds one port. Any idea, or will it simply not work? Also, Wowza needs the stream to be set up before we stream to it, so does that mean I should not follow this approach? Sorry, my English is poor; I hope you guys understand.

  • angular-ui maps javascript error

    - by Will Lopez
    I'm having an issue with angularui. This error came from angular-google-maps.js: Error: [$compile:ctreq] Controller 'googleMap', required by directive 'rectangle', can't be found! http://errors.angularjs.org/1.2.16/$compile/ctreq?p0=googleMap&p1=rectangle at http://localhost:62874/Scripts/angular.js:78:12 at getControllers (http://localhost:62874/Scripts/angular.js:6409:19) at nodeLinkFn (http://localhost:62874/Scripts/angular.js:6580:35) at compositeLinkFn (http://localhost:62874/Scripts/angular.js:5986:15) at compositeLinkFn (http://localhost:62874/Scripts/angular.js:5989:13) at compositeLinkFn (http://localhost:62874/Scripts/angular.js:5989:13) at nodeLinkFn (http://localhost:62874/Scripts/angular.js:6573:24) at compositeLinkFn (http://localhost:62874/Scripts/angular.js:5986:15) at Scope.publicLinkFn [as $transcludeFn] (http://localhost:62874/Scripts/angular.js:5891:30) at link (http://localhost:62874/Scripts/ui-bootstrap-tpls-0.12.0.min.js:9:8037) <div class="rectangle grid-style ng-scope ng-isolate-scope" data-ng-grid="pipelineGrid"> I'm a little confused because the controller isn't trying to inject the angular-ui map directive: appRoot.controller('PipelineController', ["$scope", "$location", "$resource", function ($scope, $location, $resource) { ... Here's the html: <div class="container"> <tabset> <tab heading="Upload File"> <p>Tab 1 content</p> </tab> <tab heading="Data Maintenance"> Tab 2 content <div ng-controller="PipelineController"> <div id="mapFilter" class="panel panel-default"> <div class="panel-heading text-right"> <div class="input-group"> <input type="text" class="form-control" ng- model="pipelineGrid.filterOptions.filterText" placeholder="enter filter" /> <span class="input-group-addon"><span class="glyphicon glyphicon- filter"></span></span> </div> </div> <div class="panel-body"> <div class="rectangle grid-style" data-ng-grid="pipelineGrid"> </div> </div> </div> </div> </tab> </tabset> </div> Thank you!

  • How to override PHP configuration when running in CGI mode

    - by Fitrah M
    There are some tutorials out there telling me how to override the PHP configuration when PHP is running in CGI mode, but I'm still confused because most of them assume the server is running on Linux, while I need to do this on Windows as well. My hosting does use Linux, but my local development machine runs Windows XP with XAMPP 1.7.3, so I want to get it working locally first and then change the configuration on the hosting server. PHP on my hosting server already runs as CGI, while on my local machine it still runs as an Apache module. The steps I understand so far:

    1. Change PHP to work in CGI mode. I did this by commenting out these two lines in "httpd-xampp.conf":

        # LoadFile "C:/xampp/php/php5ts.dll"
        # LoadModule php5_module modules/php5apache2_2.dll

    2. Create a "cgi-bin" directory in the DocumentRoot. My DocumentRoot is "D:\www\" (I'm using Apache with a virtual host), so it is now "D:\www\cgi-bin".

    3. Change the default "cgi-bin" directory settings from "C:/xampp/cgi-bin/" to "D:\www\cgi-bin":

        ScriptAlias /cgi-bin/ "D:/www/cgi-bin/"
        <Directory "D:\www\cgi-bin">
            Options MultiViews Indexes SymLinksIfOwnerMatch Includes ExecCGI
            AllowOverride All
            Allow from All
        </Directory>

    At this point my PHP is running as CGI; I checked with phpinfo(), which reports the Server API as CGI/FastCGI. Now I want to override the PHP configuration:

    4. I copied the 'php.ini' file to "D:\www\cgi-bin" and changed the upload_max_filesize setting from 128M to 10M.

    5. Create a 'php.cgi' file in "D:\www\cgi-bin" and put this code inside the file:

        #!/bin/sh
        /usr/local/cpanel/cgi-sys/php5 -c /home/user/public_html/cgi-bin/

    That's where I'm stuck. All of the tutorials tell me to create a 'php.cgi' file and put shell code inside it — how do I do that step on Windows? I know the next step is to create a handler in the .htaccess file to load that 'php.cgi'. And also, because I will need to change the PHP configuration on my hosting server (Linux) too, is the 'php.cgi' content above right? Some tutorials say to insert these lines instead:

        #!/bin/sh
        export PHPRC=/site/ini/1
        exec /cgi-bin/php5.cgi

    I'm sorry if my question is not clear. I'm a new member and this is my first question on this site. Thank you.

  • OnExit is not entering via PostSharp in asp.net project.

    - by mark smith
    Hi there, I have setup PostSharp and it appears to be working but i don't get it entering OnExit (i have logged setup to ensure it is working) ... Its a bit tricky to configure with asp.net - or is it just me ... I am using the 1.5 new version I basically have the following in my web.config and i had to add the SearchPath otherwise it can't find my assemblies <postsharp directory="C:\Program Files\PostSharp 1.5" trace="true"> <parameters> <!--<add name="parameter-name" value="parameter-value"/>--> </parameters> <searchPath> <!-- Always add the binary folder to the search path. --> <add name="bin" value="~\bin"/> </searchPath> </postsharp> I have set tracing on but what is strange to me is that it appears to build to the temp directory, maybe this is my issue, i am unsure .. hence i do F5 ... Is it possible to name the Output directory and output file?? As you can see it is editing a DLL in the temp dir so IIS is no longer in control so it doesn't execute it ??? Confused! :-) C:\Program Files\PostSharp 1.5\postsharp.exe "/P:Output=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.dll" "/P:IntermediateDirectory=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp " /P:CleanIntermediate=False /P:ReferenceDirectory=. /P:SignAssembly=False /P:PrivateKeyLocation= /P:ResolvedReferences= "/P:SearchPath=C:\Source Code\Visual Studio 2008\Projects\mysitemvc\mysitemvc\bin," /V /SkipAutoUpdate "C:\Program Files\PostSharp 1.5\Default.psproj" "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\before-postsharp\App_Web_04ae3ewy.dll" PostSharp 1.5 [1.5.6.627] - Copyright (c) Gael Fraiteur, 2005-2009. info PS0035: C:\Windows\Microsoft.NET\Framework\v2.0.50727\ilasm.exe "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.il" /QUIET /DLL /PDB "/RESOURCE=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.res" "/OUTPUT=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.dll" /SUBSYSTEM=3 /FLAGS=1 /BASE=18481152 /STACK=1048576 /ALIGNMENT=512 /MDV=v2.0.50727

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011 starting almost from scratch (aside from migrating the database), brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP. Database SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway). Content Manager A single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time so not under a particularly heavy load. Content Data Store Filesystem located somewhere on the network. One directory for live and one for staging. Content Delivery Two servers (cd1 and cd2) each with the the following server roles installed. cd1 writes to a filesystem content data store for the live website, cd2 writes to the content data store for the staging website. Presentation Two public facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store as its a filesystem. Each of the web servers have the Content Delivery Server installed so that I can use dynamic linking (and other features?). I've so far set up everything but the web servers. Any thoughts? edit Thanks to Ram S who linked me to a decent walkthrough, upvoted. I suppose I should have posed some questions as I didn't really ask a question. I guess I'm a little confused over the content deliver aspect. I have the Content Delivery split in two separate parts. cd1 and cd2 do the work of shifting information from the Content Manager to the Staging/Live web directories. web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (file system). Is this a correct setup? I need some parts of the Content Delivery on my web servers right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment right? But I suspect this will put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

  • Filter chain halted as [:login_required] rendered_or_redirected

    - by Magicked
    Hopefully I can explain this well enough, but please let me know if more information is needed! I'm building a form where a user can create an "incident". This incident has the following relationships: belongs_to: customer (customer has_many incidents) belongs_to: user (user has_many incidents) has_one: incident_status (incident_status belongs to incident) The form allows the user to assign the incident to a user (select form) and then select an incident status. The incident is nested in customer. However, I'm getting the following in the server logs: Processing IncidentsController#create (for 127.0.0.1 at 2010-04-26 10:41:33) [POST] Parameters: {"commit"=>"Create", "action"=>"create", "authenticity_token"=>"YhW++vd/dnLoNV/DSl1DULcaWq/RwP7jvLOVx9jQblA=", "customer_id"=>"4", "controller"=>"incidents", "incident"=>{"title"=>"Some Bad Incident", "incident_status_id"=>"1", "user_id"=>"2", "other_name"=>"SS01-042310-001"}} User Load (0.3ms) SELECT * FROM "users" WHERE ("users"."id" = 2) LIMIT 1 Redirected to http://localhost:3000/session/new Filter chain halted as [:login_required] rendered_or_redirected. Completed in 55ms (DB: 0) | 302 Found [http://localhost/customers/4/incidents] It looks to me like it's trying to gather information about the user, even though it already has the id (which is all it needs to create the incident), and the user may not have permission to do a select statement like that? I'm rather confused. Here is the relevant (I think) information in the Incident controller. before_filter :login_required, :get_customer def new @incident = @customer.incidents.build @users = @customer.users @statuses = IncidentStatus.find(:all) respond_to do |format| format.html # new.html.erb format.xml { render :xml => @incident } end end def create @incident = @customer.incidents.build(params[:incident]) respond_to do |format| if @incident.save flash[:notice] = 'Incident was successfully created.' format.html { redirect_to(@incident) } format.xml { render :xml => @incident, :status => :created, :location => @incident } else format.html { render :action => "new" } format.xml { render :xml => @incident.errors, :status => :unprocessable_entity } end end end Just as an FYI, I am using the restful_authentication plugin. So in summary, when I submit the incident creation form, it does not save the incident because it halts. I'm still very new to rails, so my skill at diagnosing problems like this is still very bad. I'm going in circles. :) Thanks in advance for any help. Please let me know if more information is needed and I'll edit it in!

  • Rails AtomFeedBuilder Entry :Url option appears in url tag but not in link tag

    - by Nick
    Hello, I'm using the AtomFeedHelper and everything is working fine except for one feed where I need to link each entry to a URL which is not the default polymorphic_url for the record. Per the documentation I've specified an :url option for the entry. This correctly renders a <url> tag in the atom node but the <link rel="alternate" still points to the default polymorphic_url. Looking at the source and the documentation I don't understand why this is happening. Here's an example builder: atom_feed do |feed| feed.title("Reports") feed.updated(@reports.first.created_at) for report in @reports content = report.notes feed.entry(report) do |entry| entry.title(report.title) entry.content(content, :type => 'html') entry.url("http://myhost/page/") entry.updated(report.updated_at.strftime("%Y-%m-%dT%H:%M:%SZ")) entry.author do |author| author.name(report.user.username) end end end end Here's an example of a problem node: <entry> <id>tag:molly.recargo.com,2005:SiteReport/2</id> <published>2010-03-30T13:11:07-07:00</published> <updated>2010-03-30T13:11:07-07:00</updated> <link rel="alternate" type="text/html" href="http://myhost/site_reports/2"/> <title>Test Title</title> <content type="html">Test Content</content> <url>http://myhost/page/</url> <updated>2010-03-30T13:11:07Z</updated> <author> <name>Author</name> </author> </entry> I wan the href value in the link tag to match the value in the url tag but it does not. When I look at the source listed for entry here http://api.rubyonrails.org/classes/ActionView/Helpers/AtomFeedHelper/AtomFeedBuilder.html I'd assume that this line would work correctly: @xml.link(:rel => 'alternate', :type => 'text/html', :href => options[:url] || @view.polymorphic_url(record)) Confused. Has anyone encountered this before? Thanks all!

  • Asynchronous sockets in C#

    - by IVlad
    I'm confused about the correct way of using the asynchronous socket methods in C#. I will refer to two articles to explain things and ask my questions: the MSDN article on asynchronous client sockets and the devarticles.com article on socket programming. My question is about the BeginReceive() method. The MSDN article uses these two functions to handle receiving data:

        private static void Receive(Socket client)
        {
            try
            {
                // Create the state object.
                StateObject state = new StateObject();
                state.workSocket = client;

                // Begin receiving the data from the remote device.
                client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                    new AsyncCallback(ReceiveCallback), state);
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }

        private static void ReceiveCallback(IAsyncResult ar)
        {
            try
            {
                // Retrieve the state object and the client socket
                // from the asynchronous state object.
                StateObject state = (StateObject)ar.AsyncState;
                Socket client = state.workSocket;

                // Read data from the remote device.
                int bytesRead = client.EndReceive(ar);
                if (bytesRead > 0)
                {
                    // There might be more data, so store the data received so far.
                    state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));

                    // Get the rest of the data.
                    client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                        new AsyncCallback(ReceiveCallback), state);
                }
                else
                {
                    // All the data has arrived; put it in response.
                    if (state.sb.Length > 1)
                    {
                        response = state.sb.ToString();
                    }
                    // Signal that all bytes have been received.
                    receiveDone.Set();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }

    The devarticles.com tutorial, on the other hand, passes null for the last parameter of the BeginReceive method, and goes on to explain that the last parameter is useful when we're dealing with multiple sockets. Now my questions are:

    1. What is the point of passing a state to the BeginReceive method if we're only working with a single socket? Is it to avoid using a class field? It seems like there's little point in doing it, but maybe I'm missing something.

    2. How can the state parameter help when dealing with multiple sockets? If I'm calling client.BeginReceive(...), won't all the data be read from the client socket? The devarticles.com tutorial makes it sound like in this call:

        m_asynResult = m_socClient.BeginReceive(theSocPkt.dataBuffer, 0, theSocPkt.dataBuffer.Length,
            SocketFlags.None, pfnCallBack, theSocPkt);

    data will be read from the theSocPkt.thisSocket socket instead of from the m_socClient socket. In their example the two are one and the same, but what happens if that is not the case?

    I just don't really see where that last argument is useful, or at least how it helps with multiple sockets. If I have multiple sockets, I still need to call BeginReceive on each of them, right?
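
    A small self-contained sketch (my own illustration, not code from either article) of how the state argument lets one callback serve several sockets at once: each pending BeginReceive carries its own buffer and its own Socket, and the callback recovers both from IAsyncResult.AsyncState. The host names and port below are made-up placeholders.

        using System;
        using System.Net.Sockets;
        using System.Text;

        class ReceiveState
        {
            public Socket Socket;                  // which connection this read belongs to
            public byte[] Buffer = new byte[4096]; // a private buffer for this connection
            public StringBuilder Data = new StringBuilder();
        }

        class MultiSocketDemo
        {
            static void Main()
            {
                // Two independent connections, one shared callback.
                foreach (var host in new[] { "server-a.example", "server-b.example" }) // hypothetical hosts
                {
                    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                    socket.Connect(host, 7000); // hypothetical port
                    var state = new ReceiveState { Socket = socket };
                    socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
                }
                Console.ReadLine(); // keep the process alive while the callbacks run
            }

            static void OnReceive(IAsyncResult ar)
            {
                // The state object tells us which socket (and which buffer) this completion is for.
                var state = (ReceiveState)ar.AsyncState;
                int bytesRead = state.Socket.EndReceive(ar);
                if (bytesRead > 0)
                {
                    state.Data.Append(Encoding.ASCII.GetString(state.Buffer, 0, bytesRead));
                    // Post the next read on the same socket, reusing the same state object.
                    state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
                }
                else
                {
                    Console.WriteLine("Connection closed, received: " + state.Data);
                    state.Socket.Close();
                }
            }
        }

    Each BeginReceive still has to be issued per socket; the state object simply means the shared callback never has to guess which connection a completion belongs to.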

  • Why does this CSS example use "height: 1%" with "overflow: auto"?

    - by Lawrence Lau
    I am reading a HTML and CSS book. It has a sample code of two-column layout. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <style> #main {height: 1%; overflow: auto;} #main, #header, #footer {width: 768px; margin: auto;} #bodycopy { float: right; width: 598px; } #sidebar {margin-right: 608px; } #footer {clear: both; } </style> </head> <body> <div id="header" style='background-color: #AAAAAA'>This is the header.</div> <div id="main" style='background-color: #EEEEEE'> <div id="bodycopy" style='background-color: #BBBBBB'> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> </div> <div id="sidebar" style='background-color: #CCCCCC'> This is the sidebar. </div> </div> <div id="footer" style='background-color: #DDDDDD'>This is the footer.</div> </body> </html> The author mentions that the use of overflow auto and 1% height will make the main area expand to encompass the computed height of content. I try to remove the 1% height and tried in different browsers but they don't show a difference. I am quite confused of its use. Any idea?

  • iPhone SDK 3/4 App will not run on an iPhone 2.x device even with deployment target set to 2.0!

    - by MarqueIV
    Ok... I know about the difference between the base/active SDKs and the deployment target. I have my base SDK set at 4.0 and the deployment target set at 2.0. I am not using any APIs post 2.x, conditional or otherwise. Since I can't debug on a 2.x device, after building it, I use the iPhone Configuration Utility to install the app on the device, which it does just fine. Problem is, it doesn't run! I just get a blank screen. The main window never comes up! Now before you ask... I had this same problem with the iPhone SDK 3.x. I upgraded to the 4.x hoping it would be solved. It wasn't. Yes the provisioning profile is installed. (Couldn't install the app if it wasn't.) This same compiled app works fine on 3.x devices. Same with 4.x devices. Just not 2.x devices. Again, no I am not using any post-2.x SDKs. To prove this I created a brand-new, window-based app from the 'New Project' dialog and the only changes I made was the background color of the window (to prove the XIB loaded) and I set the deployment target to 2.0 (It's still compiled against the 4.x SDK though.) Again, it runs fine on 3.x or 4.x devices, but just a black, blank screen on 2.x devices. I've tried this on three separate 2.x devices included one freshly restored. I've used three separate dev machines (MacBook Pro with the 3.x SDK, MacBook Pro with the 4.x SDK and a Mac Pro with the 3.x SDK.) Same result every time. I am stumped. The fact that even an unmodified project doesn't run really has me confused. Could it be the XIB file? Did they change the format from 2.x to something newer in the 3.x SDK? If so, how do I set it back to 2.x. (Again, this is just a complete guess.) But I'm really stumped! Mark

  • Vector math: finding coordinates on a plane between 2 vectors

    - by Will Kru
    I am trying to generate a 3D tube along a spline. I have the coordinates of the spline (x1,y1,z1 - x2,y2,z2 - etc.), shown in yellow in the illustration. At those points I need to generate circles, whose vertices are to be connected at a later stage. The circles need to be perpendicular to the 'corners' of two line segments of the spline to form a correct tube. Note that the number of segments is kept low for illustration purposes. (Apparently I'm not allowed to post images, so please view the image at this link: http://img191.imageshack.us/img191/6863/18720019.jpg) I have got as far as being able to calculate the vertices of each ring at each point of the spline, but they all lie in planes with the same orientation. I need them rotated according to their 'legs' (which A & B are to C, for instance). I've been thinking this over and came up with the following:

    - two line segments can be seen as two vectors (A & B in the illustration)
    - the corner (C in the illustration) is where a ring of vertices needs to be calculated
    - I need to find the plane on which all of those vertices will lie
    - I can then use this plane (= vector?) to calculate new vectors from the centre point, which is C, and find their x,y,z using radius * sin and cos

    However, I'm really confused about the math part of this. I read about the dot product, but that returns a scalar and I don't know how to apply it in this case. Can someone point me in the right direction?

    [edit] To give a bit more info on the situation: I need to construct a buffer of floats which, in groups of 3, describe vertex positions and will be connected by OpenGL ES, given another buffer with indices to form polygons. To give shape to the tube, I first created an array of floats which, in groups of 3, describe control points in 3D space. Then, along with a variable for segment density, I pass these control points to a function that uses them to create a Catmull-Rom spline and returns it as another array of floats which, again in groups of 3, describe the vertices of the Catmull-Rom spline. On each of these vertices I want to create a ring of vertices, which can also differ in density (amount of smoothness / vertices per ring). All former vertices (the control points and those describing the Catmull-Rom spline) are discarded; only the vertices that form the tube rings will be passed to OpenGL, which in turn will connect them to form the final tube. I have got as far as being able to create the Catmull-Rom spline and create rings at the positions of its vertices; however, the rings all lie in planes with the same orientation instead of following the spline's path. [/edit] Thanks!
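
    A minimal sketch of the standard construction (my own illustration, not the poster's code; the vector type, radius, and segment count are made up): average the directions of the two legs to get a tangent at the corner, build two perpendicular in-plane axes with cross products, and place the ring vertices around the corner with sin and cos.

        using System;

        struct Vec3
        {
            public double X, Y, Z;
            public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
            public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
            public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
            public static Vec3 operator *(Vec3 a, double s) => new Vec3(a.X * s, a.Y * s, a.Z * s);
            public static Vec3 Cross(Vec3 a, Vec3 b) =>
                new Vec3(a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
            public static double Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
            public Vec3 Normalized() { double len = Math.Sqrt(Dot(this, this)); return this * (1.0 / len); }
        }

        static class TubeRing
        {
            // Ring of 'segments' vertices of radius r around corner c, perpendicular to the
            // averaged direction of the two legs a->c and c->b.
            public static Vec3[] RingAtCorner(Vec3 a, Vec3 c, Vec3 b, double r, int segments)
            {
                Vec3 tangent = ((c - a).Normalized() + (b - c).Normalized()).Normalized();

                // Any vector not parallel to the tangent works as a helper "up" vector.
                Vec3 up = Math.Abs(tangent.Y) < 0.99 ? new Vec3(0, 1, 0) : new Vec3(1, 0, 0);
                Vec3 normal = Vec3.Cross(tangent, up).Normalized();       // first in-plane axis
                Vec3 binormal = Vec3.Cross(tangent, normal).Normalized(); // second in-plane axis

                var ring = new Vec3[segments];
                for (int i = 0; i < segments; i++)
                {
                    double angle = 2.0 * Math.PI * i / segments;
                    ring[i] = c + normal * (r * Math.Cos(angle)) + binormal * (r * Math.Sin(angle));
                }
                return ring;
            }
        }

    The cross product (not the dot product) is what yields the perpendicular axes here; the dot product is only used for lengths. Picking the helper "up" independently at each corner can make successive rings twist relative to one another, so a production tube would usually carry the previous ring's frame along the spline (a rotation-minimizing frame) instead.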

  • progressIndicator does not update till about 10 seconds after awakeFromNib occurs

    - by theprojectabot
    I have been trying to get this one section of my UI to immediatly up date when the document loads into view. The awakeFromNib fires the pasted code and then starts a timer to repeat every 10 seconds... I load a default storage location: ~/Movies... which shows up immediately.. yet the network location that is saved in the document that gets pulled from the XML only seems to show up after the second firing of the - (void)updateDiskSpaceDisplay timer. I have set breakpoints and know that the ivars that contain the values that are being put into the *fileSystemAttributes is the network location right when the awakeFromNib occurs... Im confused why it magically appears after the second time firing instead of immediately displaying the write values. - (void)updateDiskSpaceDisplay { // Obtain information about the file system used on the selected storage path. NSError *error = NULL; NSDictionary *fileSystemAttributes = [[NSFileManager defaultManager] attributesOfFileSystemForPath:[[[self settings] containerSettings] storagePath] error:&error]; if( !fileSystemAttributes ) return; // Get the byte capacity of the drive. long long byteCapacity = [[fileSystemAttributes objectForKey:NSFileSystemSize] unsignedLongLongValue]; // Get the number of free bytes. long long freeBytes = [[fileSystemAttributes objectForKey:NSFileSystemFreeSize] unsignedLongLongValue]; // Update the level indicator, and the text fields to show the current information. [totalDiskSpaceField setStringValue:[self formattedStringFromByteCount:byteCapacity]]; [totalDiskSpaceField setNeedsDisplay:YES]; [usedDiskSpaceField setStringValue:[self formattedStringFromByteCount:(byteCapacity - freeBytes)]]; [usedDiskSpaceField setNeedsDisplay:YES]; [diskSpaceIndicator setMaxValue:100]; [diskSpaceIndicator setIntValue:(((float) (byteCapacity - freeBytes) / (float) byteCapacity) * 100.0)]; [diskSpaceIndicator display:YES]; } thoughts? my awakeFromNib: - (void)awakeFromNib { [documentWindow setAcceptsMouseMovedEvents:YES]; [documentWindow setDelegate:self]; [self updateSettingsDisplay]; [self updateDiskSpaceDisplay]; [self setDiskSpaceUpdateTimer:[NSTimer scheduledTimerWithTimeInterval:10.0 target:self selector:@selector(updateDiskSpaceDisplay) userInfo:NULL repeats:YES]]; [self setUpClipInfoTabButtons]; [self performSelector:@selector(setupEngineController) withObject:NULL afterDelay:0.1]; }

  • What is it about DataTable Column Names with dots that makes them unsuitable for WPF's DataGrid cont

    - by Tom Ritter
    Run this, and be confused: <Window x:Class="Fucking_Data_Grids.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525"> <StackPanel> <DataGrid Name="r1" ItemsSource="{Binding Path=.}"> </DataGrid> <DataGrid Name="r2" ItemsSource="{Binding Path=.}"> </DataGrid> </StackPanel> </Window> Codebehind: using System.Data; using System.Windows; namespace Fucking_Data_Grids { public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); DataTable dt1, dt2; dt1 = new DataTable(); dt2 = new DataTable(); dt1.Columns.Add("a-name", typeof(string)); dt1.Columns.Add("b-name", typeof(string)); dt1.Rows.Add(new object[] { 1, "Hi" }); dt1.Rows.Add(new object[] { 2, "Hi" }); dt1.Rows.Add(new object[] { 3, "Hi" }); dt1.Rows.Add(new object[] { 4, "Hi" }); dt1.Rows.Add(new object[] { 5, "Hi" }); dt1.Rows.Add(new object[] { 6, "Hi" }); dt1.Rows.Add(new object[] { 7, "Hi" }); dt2.Columns.Add("a.name", typeof(string)); dt2.Columns.Add("b.name", typeof(string)); dt2.Rows.Add(new object[] { 1, "Hi" }); dt2.Rows.Add(new object[] { 2, "Hi" }); dt2.Rows.Add(new object[] { 3, "Hi" }); dt2.Rows.Add(new object[] { 4, "Hi" }); dt2.Rows.Add(new object[] { 5, "Hi" }); dt2.Rows.Add(new object[] { 6, "Hi" }); dt2.Rows.Add(new object[] { 7, "Hi" }); r1.DataContext = dt1; r2.DataContext = dt2; } } } I'll tell you what happens. The top datagrid is populated with column headers and data. The bottom datagrid has column headers but all the rows are blank.
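
    A possible explanation, offered as a hedged sketch rather than a confirmed answer: WPF binding paths give '.' special meaning (it separates property-path steps), and the DataGrid's auto-generated columns use the raw column name as the binding path, so a name like "a.name" is parsed as a path instead of a column lookup and the cells come out blank. A commonly suggested workaround is to rebind such columns through the DataRowView string indexer in an AutoGeneratingColumn handler; the handler below is my own illustration, wired up in the window's constructor with r2.AutoGeneratingColumn += EscapeDottedColumnNames;.

        using System.Windows.Controls;
        using System.Windows.Data;

        namespace Fucking_Data_Grids
        {
            public partial class MainWindow
            {
                // Attach after InitializeComponent():
                //     r2.AutoGeneratingColumn += EscapeDottedColumnNames;
                private void EscapeDottedColumnNames(object sender, DataGridAutoGeneratingColumnEventArgs e)
                {
                    var textColumn = e.Column as DataGridTextColumn;
                    if (textColumn != null && e.PropertyName.Contains("."))
                    {
                        // Bind through the row's string indexer so the dot is treated as part of
                        // the column name rather than as a property-path separator.
                        textColumn.Binding = new Binding("[" + e.PropertyName + "]");
                    }
                }
            }
        }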

  • Dropdownlist and Datareader

    - by salvationishere
    After trying many solutions listed on the internet I am very confused now. I have a C#/SQL web application for which I am simply trying to bind the result of an ExecuteReader command to a DropDownList so the user can select a value. This is a VS2008 project on Windows XP. How it works: after the user selects a table, I use this selection as an input parameter to a method in my Datamatch.aspx.cs file. Then Datamatch.aspx.cs calls a method from my ADONET.cs class file. Finally this method executes a SQL stored procedure to return the list of columns from that table. (These are all tables in the AdventureWorks DB.) I know that this method returns the list of columns successfully, because executing the stored procedure in SSMS works. However, I'm not sure how to tell whether it works in VS or not. This should be simple — how can I do this? Here is some of my code. The T-SQL stored procedure:

        CREATE PROCEDURE [dbo].[getColumnNames]
            @TableName VarChar(50)
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT col.name 'COLUMN_NAME'
            FROM sysobjects obj
            INNER JOIN syscolumns col ON obj.id = col.id
            WHERE obj.name = @TableName
        END

    It gives me the desired output when I execute the following from SSMS:

        exec getColumnNames 'AddressType'

    The code in the Datamatch.aspx.cs file is currently:

        // Add DropDownList Control to Placeholder
        private void CreateDropDownLists()
        {
            SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable);
            int NumControls = targettable.Length;
            DropDownList ddl = new DropDownList();
        }

    where ADONET_methods.DisplayTableColumns(targettable) is:

        public static SqlDataReader DisplayTableColumns(string tt)
        {
            SqlDataReader dr = null;
            string TableName = tt;
            string connString = "Server=(local);Database=AdventureWorks;Integrated Security = SSPI";
            string errorMsg;

            SqlConnection conn2 = new SqlConnection(connString);
            SqlCommand cmd = new SqlCommand("getColumnNames"); //conn2.CreateCommand();
            try
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Connection = conn2;

                SqlParameter parm = new SqlParameter("@TableName", SqlDbType.VarChar);
                parm.Value = "Person." + TableName.Trim();
                parm.Direction = ParameterDirection.Input;
                cmd.Parameters.Add(parm);

                conn2.Open();
                dr = cmd.ExecuteReader();
            }
            catch (Exception ex)
            {
                errorMsg = ex.Message;
            }
            return dr;
        }
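
    For illustration (my own sketch, not part of the question; the page class, placeholder, and control names are assumptions): once DisplayTableColumns returns an open SqlDataReader, the DropDownList can be bound to it directly and then added to a PlaceHolder so the user can pick a column.

        using System.Data.SqlClient;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class Datamatch : Page            // assumed code-behind class
        {
            protected PlaceHolder PlaceHolder1;          // assumed control declared in the .aspx markup
            private string targettable;                  // set earlier from the user's table selection

            private void CreateDropDownLists()
            {
                SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable);

                DropDownList ddl = new DropDownList();
                ddl.ID = "ddlColumns";
                ddl.DataSource = dr2;                    // a data reader is a valid list data source
                ddl.DataTextField = "COLUMN_NAME";       // the column returned by getColumnNames
                ddl.DataValueField = "COLUMN_NAME";
                ddl.DataBind();
                dr2.Close();

                PlaceHolder1.Controls.Add(ddl);
            }
        }

    Note that DisplayTableColumns opens conn2 but never closes it; calling cmd.ExecuteReader(CommandBehavior.CloseConnection) there and closing the reader after DataBind(), as above, would let the connection be released as well.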

  • terminate called after throwing an instance of 'std::length_error'

    - by mark
    hello all, this is my first post here. As i am newbie, the problem might be stupid. I was writing a piece of code while the following error message shown, terminate called after throwing an instance of 'std::length_error' what(): basic_string::_S_create /home/gcj/finals /home/gcj/quals where Aborted the following is the offending code especially Line 39 to Line 52. It is weired for me as this block of code is almost same as the Line64 to Line79. int main(){ std::vector<std::string> dirs, need; std::string tmp_str; std::ifstream fp_in("small.in"); std::ofstream fp_out("output"); std::string::iterator iter_substr_begin, iter_substr_end; std::string slash("/"); int T, N, M; fp_in>>T; for (int t = 0; t < T; t++){ std::cout<<" time "<< t << std::endl; fp_in >> N >> M; for (int n =0; n<N; n++){ fp_in>>tmp_str; dirs.push_back(tmp_str); tmp_str.clear(); } for (int m=0; m<M; m++){ fp_in>>tmp_str; need.push_back(tmp_str); tmp_str.clear(); } for (std::vector<std::string>::iterator iter = dirs.begin(); iter!=dirs.end(); iter++){ for (std::string::iterator iter_str = (*iter).begin()+1; iter_str<(*iter).end(); ++iter_str){ if ((*iter_str)=='/') { std::string tmp_str2((*iter).begin(), iter_str); if (find(dirs.begin(), dirs.end(), tmp_str2)==dirs.end()) { dirs.push_back(tmp_str2); } } } } for (std::vector<std::string>::iterator iter_tmp = dirs.begin(); iter_tmp!= dirs.end(); ++iter_tmp) std::cout<<*iter_tmp<<" "; dirs.clear(); std::cout<<std::endl; std::cout<<" need "<<std::endl; //processing the next for (std::vector<std::string>::iterator iter_tmp = need.begin(); iter_tmp!=need.end(); ++iter_tmp) std::cout<<*iter_tmp<<" "; std::cout<<" where "; for (std::vector<std::string>::iterator iter = need.begin(); iter!=need.end(); iter++){ for (std::string::iterator iter_str = (*iter).begin()+1; iter_str<(*iter).end(); ++iter_str){ if ((*iter_str)=='/') { std::string tmp_str2((*iter).begin(), iter_str); if (find(need.begin(), need.end(), tmp_str2)==need.end()) { need.push_back(tmp_str2); } } } } for (std::vector<std::string>::iterator iter_tmp = need.begin(); iter_tmp!= need.end(); ++iter_tmp) std::cout<<*iter_tmp<<" "; need.clear(); std::cout<<std::endl; //finish processing the next } for (std::vector<std::string>::iterator iter= dirs.begin(); iter!=dirs.end(); iter++) std::cout<<*iter<<" "; std::cout<<std::endl; for (std::vector<std::string>::iterator iter= need.begin(); iter!=need.end(); iter++) std::cout<<*iter<<" "; std::cout<<std::endl; fp_out.close(); } best regards, Mark

  • Mercurial hook to disallow committing large binary files

    - by hekevintran
    I want to have a Mercurial hook that will run before committing a transaction that will abort the transaction if a binary file being committed is greater than 1 megabyte. I found the following code which works fine except for one problem. If my changeset involves removing a file, this hook will throw an exception. The hook (I'm using pretxncommit = python:checksize.newbinsize): from mercurial import context, util from mercurial.i18n import _ import mercurial.node as dpynode '''hooks to forbid adding binary file over a given size Ensure the PYTHONPATH is pointing where hg_checksize.py is and setup your repo .hg/hgrc like this: [hooks] pretxncommit = python:checksize.newbinsize pretxnchangegroup = python:checksize.newbinsize preoutgoing = python:checksize.nopull [limits] maxnewbinsize = 10240 ''' def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) tip = context.changectx(repo, 'tip').rev() ctx = context.changectx(repo, node) for rev in range(ctx.rev(), tip+1): ctx = context.changectx(repo, rev) print ctx.files() for f in ctx.files(): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid The exception: $ hg commit -m 'commit message' error: pretxncommit hook raised an exception: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest transaction abort! rollback completed abort: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest! I'm not familiar with writing Mercurial hooks, so I'm pretty confused about what's going on. Why does the hook care that a file was removed if hg already knows about it? Is there a way to fix this hook so that it works all the time? Update (solved): I modified the hook to filter out files that were removed in the changeset. def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) ctx = repo[node] for rev in xrange(ctx.rev(), len(repo)): ctx = context.changectx(repo, rev) # do not check the size of files that have been removed # files that have been removed do not have filecontexts # to test for whether a file was removed, test for the existence of a filecontext filecontexts = list(ctx) def file_was_removed(f): """Returns True if the file was removed""" if f not in filecontexts: return True else: return False for f in itertools.ifilterfalse(file_was_removed, ctx.files()): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid

  • How to configure maximum number of transport channels in WCF using basicHttpBinding?

    - by Hemant
    Consider following code which is essentially a WCF host: [ServiceContract (Namespace = "http://www.mightycalc.com")] interface ICalculator { [OperationContract] int Add (int aNum1, int aNum2); } [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)] class Calculator: ICalculator { public int Add (int aNum1, int aNum2) { Thread.Sleep (2000); //Simulate a lengthy operation return aNum1 + aNum2; } } class Program { static void Main (string[] args) { try { using (var serviceHost = new ServiceHost (typeof (Calculator))) { var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None); serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc"); serviceHost.Open (); Console.WriteLine ("Service is running. ENJOY!!!"); Console.WriteLine ("Type 'stop' and hit enter to stop the service."); Console.ReadLine (); if (serviceHost.State == CommunicationState.Opened) serviceHost.Close (); } } catch (Exception e) { Console.WriteLine (e); Console.ReadLine (); } } } Also the WCF client program is: class Program { static int COUNT = 0; static Timer timer = null; static void Main (string[] args) { var threads = new Thread[10]; for (int i = 0; i < threads.Length; i++) { threads[i] = new Thread (Calculate); threads[i].Start (null); } timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000); Console.ReadLine (); timer.Dispose (); } static void Calculate (object state) { var c = new CalculatorClient ("BasicHttpBinding_ICalculator"); c.Open (); while (true) { try { var sum = c.Add (2, 3); Interlocked.Increment (ref COUNT); } catch (Exception ex) { Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ()); break; } } c.Close (); } } Basically, I am creating 10 proxy clients and then repeatedly calling Add service method. Now if I run both applications and observe opened TCP connections using netstat, I find that: If both client and server are running on same machine, number of tcp connections are equal to number of proxy objects. It means all requests are being served in parallel. Which is good. If I run server on a separate machine, I observed that maximum 2 TCP connections are opened regardless of the number of proxy objects I create. Only 2 requests run in parallel. It hurts the processing speed badly. If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object even if they are running on different machines). I am very confused and unable to make the basicHttpBinding use more TCP connections. I know it is a long question, but please help!
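
    One detail worth illustrating (my own hedged note, not something stated in the question): with an HTTP-based binding the client's requests go through System.Net, which caps concurrent HTTP connections to a given remote host at a small default (historically two), and that matches the "only 2 TCP connections" observation against a separate machine; local endpoints are not throttled the same way, and net.tcp does not use this limit at all. A minimal sketch of raising the limit on the client before the proxies are created:

        using System.Net;

        class Program
        {
            static void Main(string[] args)
            {
                // Allow up to 10 concurrent HTTP connections per remote host
                // (one per proxy/thread in the test client). This must run
                // before the first request goes out.
                ServicePointManager.DefaultConnectionLimit = 10;

                // ... create the CalculatorClient proxies and start the threads as before ...
            }
        }

    The same setting can also be made in the client's app.config under <system.net><connectionManagement> with an <add address="*" maxconnection="10" /> element.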

  • How to Determine the Size of MSADO Command Parameters

    - by Adam
    I am new to MS ADO and trying to understand how to set the size on command parameters as created by the command.CreateParameter (Name, Type, Direction, Size, Value) The documentation says the following: Size Optional. A Long value that specifies the maximum length for the parameter value in characters or bytes. ... If you specify a variable-length data type in the Type argument, you must either pass a Size argument or set the Size property of the Parameter object before appending it to the Parameters collection; otherwise, an error occurs. 1.) What should one pass for fixed-size parameters? Is it a "don't care"? I was a bit confused by the example found here, in which they set size to 3 for an adInteger parameter with Value set to a variant of type VT_I2 pPrmByRoyalty->Type = adInteger; pPrmByRoyalty->Size = 3; pPrmByRoyalty->Direction = adParamInput; pPrmByRoyalty->Value = vtroyal; VT_I2 implies two bytes. A tagVARIANT struct is 16 bytes. How did they land on three? I see that the enum value for adInteger happens to be three, but I suspect that is just a coincidence. So it's a bit confusing what to pass for fixed-size parameters. The team I'm working with has always passed sizeof(int) for adInteger, and it seems to work. Is that correct? Now, for "variable-length" parameters: we are instructed by the documentation to pass "the maximum length .. in characters or bytes". 2.) For adVarChar, is it sufficient to pass the max width as defined in the database? 3.) What about the Wide types (e.g. adVarWChar)? Is it characters or bytes? 4.) How about adVariant, which could contain fixed- or variable-length data? 5.) Do arrays ever come into play here? (we don't pass them as parameters, just curious) Any references or personal insights are welcome.

  • Where would I implement this array to pass?

    - by Keeano Martin
    I currently build an NSMutableArray in Class A.m within the ViewDidLoad Method. - (void)viewDidLoad { [super viewDidLoad]; //Question Array Setup and Alloc stratToolsDict = [[NSMutableDictionary alloc] initWithObjectsAndKeys:countButton,@"count",camerButton,@"camera",videoButton,@"video",textButton,@"text",probeButton,@"probe", nil]; stratTools = [[NSMutableArray alloc] initWithObjects:@"Tools",stratToolsDict, nil]; stratObjectsDict = [[NSMutableDictionary alloc]initWithObjectsAndKeys:stratTools,@"Strat1",stratTools,@"Strat2",stratTools,@"Strat3",stratTools,@"Strat4", nil]; stratObjects = [[NSMutableArray alloc]initWithObjects:@"Strategies:",stratObjectsDict,nil]; QuestionDict = [[NSMutableDictionary alloc]initWithObjectsAndKeys:stratObjects,@"Question 1?",stratObjects,@"Question 2?",stratObjects,@"Question 3?",stratObjects,@"Question 4?",stratObjects,@"Question 5?", nil]; //add strategys to questions QuestionsList = [[NSMutableArray alloc]init]; for (int i = 0; i < 1; i++) { [QuestionsList addObject:QuestionDict]; } NSLog(@"Object: %@",QuestionsList); At the end of this method you will see QuestionsList being initialized and now I need to send this Array to Class B. So I place its setters and getters using the @property and @Synthesize method. Class A.h @property (retain, nonatomic) NSMutableDictionary *stratToolsDict; @property (retain, nonatomic) NSMutableArray *stratTools; @property (retain, nonatomic) NSMutableArray *stratObjects; @property (retain, nonatomic) NSMutableDictionary *QuestionDict; @property (retain, nonatomic) NSMutableArray *QuestionsList; Class A.m @synthesize QuestionDict; @synthesize stratToolsDict; @synthesize stratObjects; @synthesize stratTools; @synthesize QuestionsList; I use the property method because I am going to call this variable from Class B and want to be able to assign it to another NSMutableArray. I then add the @property and @class for Class A to Class B.h as well as declare the NSMutableArray in the @interface. #import "Class A.h" @class Class A; @interface Class B : UITableViewController<UITableViewDataSource, UITableViewDelegate>{ NSMutableArray *QuestionList; Class A *arrayQuestions; } @property Class A *arrayQuestions; Then I call NSMutableArray from Class A in the Class B.m -(id)initWithStyle:(UITableViewStyle)style { if ([super initWithStyle:style] != nil) { //Make array arrayQuestions = [[Class A alloc]init]; QuestionList = arrayQuestions.QuestionsList; Right after this I Log the NSMutableArray to view values and check that they are there and it returns NIL. //Log test NSLog(@"QuestionList init method: %@",QuestionList); Info about Class B- Class B is a UIPopOverController for Class A, Class B has one View which holds a UITableView which I have to populate the results of Class A's NSMutableArray. Why is the NsMutableArray coming back as NIL? Ultimately would like some help figuring it out as well, it seems to really have me confused. Help is greatly appreciated!!

  • How to make and use first object in Objective C to store sender tag

    - by dbonneville
    I started with a sample program that simply sets up a UIView with some buttons on it. Works great. I want to access the sender tag from each button and save it. My first thought was to save it to a global variable but I couldn't get it to work right. I thought the best way would be to create an object with one property and synthesize it so I can get or set it as needed. First, I created GlobalGem.h: #import <Foundation/Foundation.h> @interface GlobalGem : NSObject { int gemTag; } @property int gemTag; -(void) print; @end Then I created GlobalGem.m: #import "GlobalGem.h" @implementation GlobalGem @synthesize gemTag; -(void) print { NSLog(@"gemTag value is %i", gemTag); } @end The sample code I worked from doesn't do anything I can see in "main" where the program initializes. They just have their methods and are set up to handle IBOutlet actions. I want to use this object in the IBOutlet methods. But I don't know where to create the object. In the viewDidLoad, I tried this: GlobalGem *aGem = [[GlobalGem alloc]init]; [aGem print]; I added reference to the .h file of course. The print method works. In the nslog I get "gemTag value is 0". But now I want to use the instance aGem in some of the IBOutlet actions. For instance, if I put this same method in one of the outlet actions like this: - (IBAction)touchButton:(id)sender { [aGem print]; } ...I get a build error saying that "aGem is undeclared". Yes, I'm new to objective c and am very confused. Is the instance I created, aGem, in the viewDidLoad not accessible outside of that method? How would I create an instance of a Class that I can store a variable value that any of the IBOutlet methods can access? All the buttons need to be able to store their sender tag in the instance aGem and I'm stuck. As I mentioned earlier, I was able to write the sender tag to a global variable I declared in the UIView .h file that came with the program, but I ran into issues using it. Where am I off?

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused as none of the sites that I found online directly and clearly answered some of the questions I have. So here are the questions I have for hardware acceleration in Java: 1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10 I think), is hardware acceleration enabled by default? I read somewhere that someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping() is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing to. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it? 2) I have seen a couple articles here and there about the difference between a BufferedImage and VolatileImage, mainly saying that VolatileImage is the hardware accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated as well in my environment? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of having a VolatileImage in the case of both having acceleration is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also support acceleration now, would it not have the same kind of detection built into it as well, just hidden from the user, in case that the memory is dumped? 3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read() In a tutorial I have been reading for some general concepts about setting up the rendering window properly (tutorial) it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get their "hardware accelerated" images for fast drawing, which ties into question 2 about if it is hardware accelerated. 4) This is less hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once to reduce the number of times the current texture needs to be switch. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly. 5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other type of way I should be implementing this? Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.

  • How to add an image to a ListView in Android

    - by Wawan Den Frastøtende
    i would like to add images on my list view, and i have this code package com.wilis.appmysql; import android.app.ListActivity; import android.content.Intent; import android.os.Bundle; //import android.util.Log; import android.view.View; import android.widget.ArrayAdapter; import android.widget.ListView; import android.widget.Toast; public class menulayanan extends ListActivity { /** Called when the activity is first created. */ public void onCreate(Bundle icicle) { super.onCreate(icicle); // Create an array of Strings, that will be put to our ListActivity String[] menulayanan = new String[] { "Berita Terbaru", "Info Item", "Customer Service", "Help","Exit"}; //Menset nilai array ke dalam list adapater sehingga data pada array akan dimunculkan dalam list this.setListAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1,menulayanan)); } @Override /**method ini akan mengoveride method onListItemClick yang ada pada class List Activity * method ini akan dipanggil apabilai ada salah satu item dari list menu yang dipilih */ protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); // Get the item that was clicked // Menangkap nilai text yang dklik Object o = this.getListAdapter().getItem(position); String pilihan = o.toString(); // Menampilkan hasil pilihan menu dalam bentuk Toast tampilkanPilihan(pilihan); } /** * Tampilkan Activity sesuai dengan menu yang dipilih * */ protected void tampilkanPilihan(String pilihan) { try { //Intent digunakan untuk sebagai pengenal suatu activity Intent i = null; if (pilihan.equals("Berita Terbaru")) { i = new Intent(this, PraBayar.class); } else if (pilihan.equals("Info Item")) { i = new Intent(this, PascaBayar.class); } else if (pilihan.equals("Customer Service")) { i = new Intent(this, CustomerService.class); } else if (pilihan.equals("Help")) { i = new Intent(this, Help.class); } else if (pilihan.equals("Exit")) { finish(); } else { Toast.makeText(this,"Anda Memilih: " + pilihan + " , Actionnya belum dibuat", Toast.LENGTH_LONG).show(); } startActivity(i); } catch (Exception e) { e.printStackTrace(); } } } and i want to add different image per list, so i mean is i want to add a.png to "Berita Terbaru", b.png to "Info Item", c.png "Customer Service" , so how to do it? i was very confused about this, thanks before...
