Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • ASP.NET MVC - Override Html.TextBoxFor(model.property) with a new helper with the same signature?

    - by JK
    I want to override Html.TextBoxFor() with my own helper that has the exact same signature (but a different namespace, of course) - is this possible, and if so, how?

    The reason for this is that I have 100+ views in an already existing app, and I want to change the behaviour of TextBoxFor so that it outputs a maxlength=n attribute if the property has a [StringLength(n)] annotation. The code for automatically outputting maxlength=n is in this question: http://stackoverflow.com/questions/2386365/maxlength-attribute-of-a-text-box-from-the-dataannotations-stringlength-in-mvc2. But my question is not a duplicate - I am trying to create a more generic solution, where the DataAnnotation flows into the html automatically, without any need for additional code by the person writing the view.

    In the referenced question, you have to change every single Html.TextBoxFor to an Html.CustomTextBoxFor. I need to do it so that the existing TextBoxFor() calls do not need to be changed - hence creating a helper with the same signature: change the behaviour of the helper method, and all existing instances will just work without any changes (100+ views, at least 500 TextBoxFor() calls - don't want to manually edit that).

    I tried this code (and I need to repeat it for each overload of TextBoxFor, but once the root problem is solved, that will be trivial):

        namespace My.Helpers
        {
            public static class CustomTextBoxHelper
            {
                public static MvcHtmlString TextBoxFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper,
                    Expression<Func<TModel, TProperty>> expression, object htmlAttributes, bool includeLengthIfAnnotated)
                {
                    // implementation here
                }
            }
        }

    But I am getting a compiler error in the view on Html.TextBoxFor(): "The call is ambiguous between the following methods or properties" (of course). Is there any way to do this? Is there an alternative approach that would allow me to change the behaviour of Html.TextBoxFor, so that the views that already use it do not need to be changed?
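    A sketch of the kind of wrapper being described (not the poster's code, and not a confirmed fix): one approach that gets suggested for this ambiguity is to remove the default System.Web.Mvc.Html namespace import from the views' web.config and import your own namespace instead, so only one TextBoxFor extension is in scope, then delegate to the built-in helper with a fully qualified call. The attribute lookup below is illustrative and assumes a simple member-access expression:

        using System;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Web.Mvc;
        using System.Web.Routing;

        namespace My.Helpers
        {
            public static class CustomTextBoxHelper
            {
                // Same shape as the framework helper; this only resolves unambiguously if the
                // default System.Web.Mvc.Html import has been removed from the views' web.config.
                public static MvcHtmlString TextBoxFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper,
                    Expression<Func<TModel, TProperty>> expression, object htmlAttributes)
                {
                    var attributes = new RouteValueDictionary(htmlAttributes);

                    // If the bound property carries [StringLength(n)], emit maxlength=n.
                    var member = expression.Body as MemberExpression;
                    if (member != null)
                    {
                        var stringLength = member.Member
                            .GetCustomAttributes(typeof(StringLengthAttribute), true)
                            .Cast<StringLengthAttribute>()
                            .FirstOrDefault();
                        if (stringLength != null)
                            attributes["maxlength"] = stringLength.MaximumLength;
                    }

                    // Delegate to the built-in helper, fully qualified to avoid self-recursion.
                    return System.Web.Mvc.Html.InputExtensions.TextBoxFor(htmlHelper, expression, attributes);
                }
            }
        }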

    Read the article

  • Forbidden Patterns Check-In Policy in TFS 2010

    - by Jaxidian
    I've been trying to use the Forbidden Patterns part of the TFS 2010 Power Tools and I'm just not understanding something - I simply cannot get anything to change as I try to use this! I'm using the version that was released recently (I believe April 23, 2010), so it's not an old version. First off, yes, I know it's regex based, so let's clear that doubt...

    I have tried to block the following scenarios:

    1) I have modified all of my T4 EF templates to generate files named EntityName.gen.cs. I then attempted to prevent TFS from wanting to check those files in. I used the regular expression \.gen\.cs\z and it didn't change a single thing! I even tried it without the \z and nada!

    2) I don't want app.config and web.config files to be checked in by default, because we have these things stored in app.config.base and web.config.base files that our build scripts use to generate our per-environment app.config and web.config files. As such, I tried the following regexes and again, nothing worked! web\.config\z, app\.config\z, web\.release\.config\z and web\.debug\.config\z.

    What is it that I am screwing up with this?
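    As a sanity check on the expressions themselves (a standalone sketch, not the check-in policy's code - the server paths are made up), both \z and $ anchor at the end of the input in .NET, and the patterns do match the file names in question, which suggests the problem lies in how the policy is configured or applied rather than in the regexes:

        using System;
        using System.Text.RegularExpressions;

        class ForbiddenPatternSanityCheck
        {
            static void Main()
            {
                // Hypothetical server paths of the kind the policy would evaluate.
                string[] paths =
                {
                    "$/MyProject/Entities/Customer.gen.cs",
                    "$/MyProject/Web/web.config",
                    "$/MyProject/Web/web.config.base"   // should NOT match
                };

                string[] patterns = { @"\.gen\.cs\z", @"web\.config\z", @"app\.config\z" };

                foreach (var path in paths)
                    foreach (var pattern in patterns)
                        if (Regex.IsMatch(path, pattern, RegexOptions.IgnoreCase))
                            Console.WriteLine("{0}  matches  {1}", path, pattern);
            }
        }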

    Read the article

  • SCons does not clean all files

    - by meowsqueak
    I have a file system containing directories of "builds", each of which contains a file called "build-info.xml". However some of the builds happened before the build script generated "build-info.xml" so in that case I have a somewhat non-trivial SCons SConstruct that is used to generate a skeleton build-info.xml so that it can be used as a dependency for further rules. I.e., for each directory:

    - if build-info.xml already exists, do nothing. More importantly, do not remove it on a 'scons --clean'.
    - if build-info.xml does not exist, generate a skeleton one instead - build-info.xml has no dependencies on any other files - the skeleton is essentially minimal defaults.
    - during a --clean, remove build-info.xml if it was generated, otherwise leave it be.

    My SConstruct looks something like this:

        def generate_actions_BuildInfoXML(source, target, env, for_signature):
            cmd = "python '%s/bin/create-build-info-xml.py' --version $VERSION --path . --output ${TARGET.file}" % (Dir('#').abspath,)
            return cmd

        bld = Builder(generator = generate_actions_BuildInfoXML, chdir = 1)
        env.Append(BUILDERS = { "BuildInfoXML" : bld })

        ...

        # VERSION = some arbitrary string, not important here
        # path = filesystem path, set elsewhere
        build_info_xml = "%s/build-info.xml" % (path,)
        if not os.path.exists(build_info_xml):
            env.BuildInfoXML(build_info_xml, None, VERSION = build)

    My problem is that 'scons --clean' does not remove the generated build-info.xml files. I played around with env.Clean(t, build_info_xml) within the 'if' but I was unable to get this to work - mainly because I could not work out what to assign to 't' - I want a generated build-info.xml to be cleaned unconditionally, rather than based on the cleaning of another target, and I wasn't able to get this to work. If I tried a simple env.Clean(None, "build_info_xml") after but outside the 'if' I found that SCons would clean every single build-info.xml file including those that weren't generated. Not good either.

    What I'd like to know is how SCons goes about determining which files should be cleaned and which should not. Is there something funny about the way I've used a generator function that prevents SCons from recording this target as a Clean candidate?

    Read the article

  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view, in grid results mode, is 65535. (It is even less in text results mode.) Sometimes I need to see something beyond that range.

    Using SQL Server 2005 databases, I often used the trick of converting it to XML, because SSMS lets you view much larger amounts of text that way:

        SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ...

    But now I am using SQL CE, and there is no Xml data type. There is still a "Maximum Characters Retrieved XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can just get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value?

    [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best to worst:

    1. A solution that works just as easily as the XML trick in SQL CE. That is, a single function (convert, cast, etc.) that does the job.
    2. A not-too-invasive way to hack SSMS to get it to display more text in the results.
    3. An equivalent SQL query (perhaps something that creatively uses SUBSTRING and generates multiple ad-hoc columns??) to see the results.

    The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?
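    For reference, the sort of throwaway console app alluded to above might look like this (the connection string, table, and column names are placeholders, and it assumes the SQL Server Compact ADO.NET provider is available):

        using System;
        using System.Data.SqlServerCe;

        class DumpNtextColumn
        {
            static void Main()
            {
                // Placeholder connection string and query - point these at the real .sdf file and column.
                using (var conn = new SqlCeConnection(@"Data Source=C:\data\MyDb.sdf"))
                using (var cmd = new SqlCeCommand("SELECT MyCol FROM MyTable WHERE Id = 42", conn))
                {
                    conn.Open();

                    // ExecuteScalar returns the full column value, with no 65535-character display cap.
                    object value = cmd.ExecuteScalar();
                    Console.WriteLine(value as string ?? "(null)");
                }
            }
        }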

    Read the article

  • Databinding a multiselect ListBox in a FormView control

    - by drs9222
    I have a multiselect ListBox within a FormView. I can't figure out how to set up the databinding for it. I can populate the listbox fine. If the listbox were single-select I could use, say, SelectedValue='<%# Bind("col") %>' and that works fine. Is there something that I can bind to for a multiselect listbox?

    I've tried manually databinding by handling the DataBinding event of the listbox and setting the Selected property on the appropriate items. Unfortunately, there are no items in the listbox during the DataBinding event. The closest thing I've found is to save the value that determines which items should be selected during DataBinding and then use that value in the FormView's DataBinding event to select the right items, which seems like a hack to me. Is there a better way?

    EDIT: To clarify what I am currently doing... I am using the FormView's ItemCreated event to save the FormView's DataItem. Then in the FormView's DataBound event I find the listbox and manually set the selected items. It doesn't seem right that I have to save the value like this, and I assume there is a more correct way to do this that I just can't see.
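    A rough sketch of the DataBound approach described above (the control IDs, field name, and comma-separated storage format are all assumptions for illustration, not the poster's actual code):

        using System;
        using System.Data;
        using System.Web.UI.WebControls;

        public partial class EditPage : System.Web.UI.Page
        {
            // FormView1 is assumed to be declared in the page markup / designer file.
            // Wired to the FormView's DataBound event, by which point the ListBox items exist.
            protected void FormView1_DataBound(object sender, EventArgs e)
            {
                var row = FormView1.DataItem as DataRowView;
                var listBox = FormView1.FindControl("MyListBox") as ListBox;
                if (row == null || listBox == null)
                    return;

                // Assumes the bound column holds a comma-separated list of selected values.
                string[] selected = ((row["col"] as string) ?? "").Split(',');
                foreach (ListItem item in listBox.Items)
                    item.Selected = Array.IndexOf(selected, item.Value) >= 0;
            }
        }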

    Read the article

  • DataView.RowFilter Vs DataTable.Select() vs DataTable.Rows.Find()

    - by Aseem Gautam
    Considering the code below:

        DataView someView = new DataView(sometable);
        someView.RowFilter = someFilter;
        if (someView.Count > 0) { ... }

    Quite a number of articles say DataTable.Select() is better than using DataViews, but these are prior to VS2008:

    - Solved: The Mystery of DataView's Poor Performance with Large Recordsets
    - Array of DataRecord vs. DataView: A Dramatic Difference in Performance

    Googling on this topic, I found some articles/forum topics which mention that DataTable.Select() itself is quite buggy (not sure on this) and underperforms in various scenarios. In the Best Practices ADO.NET topic on MSDN it is suggested that if there is a primary key defined on a DataTable, the FindRows() or Find() methods should be used instead of DataTable.Select(). Another article (.NET 1.1) benchmarks all three approaches plus a couple more, but as it is for version 1.1, I'm not sure if those results are still valid. According to it, DataRowCollection.Find() outperforms all approaches and DataTable.Select() outperforms DataView.RowFilter.

    So I am quite confused about what might be the best approach for finding rows in a DataTable. Or is there no single good way to do this, and do multiple solutions exist depending upon the scenario?
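    For concreteness, a small self-contained sketch of the three lookups being compared (table and column names are made up); the keyed Rows.Find() is the one MSDN recommends when a primary key exists:

        using System;
        using System.Data;

        class FindRowsDemo
        {
            static void Main()
            {
                var table = new DataTable("Orders");
                table.Columns.Add("OrderId", typeof(int));
                table.Columns.Add("Status", typeof(string));
                table.PrimaryKey = new[] { table.Columns["OrderId"] };
                table.Rows.Add(1, "Open");
                table.Rows.Add(2, "Shipped");

                // Keyed lookup: uses the primary-key index, no filter-expression parsing.
                DataRow hit = table.Rows.Find(2);
                Console.WriteLine(hit != null ? (string)hit["Status"] : "not found");

                // The expression-based alternatives discussed above.
                DataRow[] viaSelect = table.Select("Status = 'Open'");
                var view = new DataView(table) { RowFilter = "Status = 'Open'" };
                Console.WriteLine("{0} via Select, {1} via DataView", viaSelect.Length, view.Count);
            }
        }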

    Read the article

  • Android multiple email attachment using Intent question

    - by yyyy1234
    Hi, I've been working on an Android program to send email with an attachment (image file, audio file, etc.) using an Intent with ACTION_SEND. The program is working when the email has a single attachment. I used Intent.putExtra(android.content.Intent.EXTRA_STREAM, uri) to attach the designated image file to the mail and it is working fine - the mail can be delivered through Gmail.

    However, when I tried to have multiple images attached to the same mail by calling Intent.putExtra(android.content.Intent.EXTRA_STREAM, uri) multiple times, it failed to work. None of the attachments show up in the email.

    I searched the SDK documentation and the Android programming user group about email attachments but cannot find any related info. However, I've discovered that there's another intent constant, ACTION_SEND_MULTIPLE (available since API level 4), which might meet my requirement. Based on the SDK documentation, it simply states that it delivers multiple data to someone else - it works like ACTION_SEND, except the data is multiple. But I still could not figure out the correct usage for this command. I tried to declare the intent with ACTION_SEND_MULTIPLE, then call putExtra(EXTRA_STREAM, uri) multiple times to attach multiple images, but I got the same erroneous result just like before: none of the attachments show up in the email.

    Has anyone tried ACTION_SEND_MULTIPLE and got it working with multiple email attachments? Thanks very much for your help. Regards, yyyy1234

    Read the article

  • Abcpdf throwing System.ExecutionEngineException

    - by Tom Tresansky
    I have the binary for several PDF files stored in a collection of byte arrays. My goal is to concatenate them into a single .pdf file using abcpdf, then stream that newly created file to the Response object on a page of an ASP.NET website. Had been doing it like this:

        BEGIN LOOP
            ...
            'Create a new Doc
            Dim doc As Doc = New Doc
            'Read the binary of the current PDF
            doc.Read(bytes)
            'Append to the master merged PDF doc
            _mergedPDFDoc.Append(doc)
        END LOOP

    Which was working fine 95% of the time. Every now and then however, creating a new Doc object would throw a System.ExecutionEngineException and crash the CLR. It didn't seem to be related to a large number of PDFs (sometimes it would happen with only 2), or to large-sized PDFs. It seemed almost completely random. This is a known bug in abcpdf described (not very well) here: Item 6.24.

    I came across a helpful SO post which suggested using a Using block for the abcpdf Doc object. So now I'm doing this:

        Using doc As New Doc
            'Read the binary of the current PDF
            doc.Read(bytes)
            'Append to the master merged PDF doc
            _mergedPDFDoc.Append(doc)
        End Using

    And I haven't seen the problem occur again yet, and have been pounding on a test version as best I can to get it to. Has anyone had any similar experience with this error? Did this fix it?

    Read the article

  • Random DNS Client Issue with BIND9/Windows Server 2003 DNS

    - by upkels
    Within our office, we have a local server running DNS for internal "domains" (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, only the hosts configured with those extensions will stop resolving on the Windows-based workstations. Sometimes it'll work for a couple of weeks without issue on one machine, then suddenly stop working, or it'll happen on another machine 15 times per day. It's completely random across all workstations.

    When troubleshooting, I have opened up a command prompt and issued various nslookup commands for some of these hosts, and they resolve; however, I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers, email clients, etc.

    The only solution thus far is manually restarting the Windows DNS Client on each workstation when this happens. Issuing the ipconfig /flushdns command multiple times helps every now and then, but is not successful enough to even attempt before restarting the DNS Client. I have tried two different DNS servers, BIND9 and Windows Server 2003 R2 DNS, and the behavior is the same.

    We have a single Netgear JGS524 switch that all workstations and servers within the office are connected to, and a Linksys SR224G switch in another department with workstations attached.

    Read the article

  • apache htaccess rewrite rules make redirection loop

    - by Ali
    Hi guys, I have a very strange problem with Apache .htaccess URL rewriting and redirection. Here's my setup: I have a Zend application with a single point of entry (index.php) directly under my Apache document root (call this the "public" folder). I also have all other public files (images, js, css, etc.) under the public folder. Here, I also have a WordPress blog under the "blog" folder. There's an empty test folder too.

    The Problem

    When I go to mydomain.com/blog, I get redirected to http://www.theredpin.com/blog (correctly), then to http://www.theredpin.com/blog/ (just with an extra / at the end), and finally to http://theredpin.com/blog/ -- and we're back where we started. The loop continues. What I don't understand is why would WordPress try to remove the www? I'm guessing it's WordPress because my empty test folder acts just fine! PLEASE HELP. I'M REALLY DESPERATE :( Thank you very much.

    Other things that happen:

    - When I go to mydomain.com, I correctly get redirected to www.mydomain.com
    - When I go to www.mydomain.com, I correctly stay where I am
    - When I go to www.mydomain.com/test or mydomain.com/test, the behaviour is correct.

    Setup

    So my .htaccess file does the following. If there's no www., then add it and do a 301 redirect. Here's the code I use:

        RewriteCond %{HTTP_HOST} ^mydomain.com [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [L,R=301]

    If the request is NOT for a resource (image, etc.), or the blog, then load the Zend application by rewriting to index.php:

        RewriteRule !((^blog(/)?.*$)|(.(js|ico|gif|jpg|jpeg|png|css|cur|JPG|html|txt))$) index.php

    Thanks again for all your help!!! Ali

    Read the article

  • JQgrid - Get Row Number instead of ID

    - by mariojjsimoes
    Hello, all, I am creating a jqGrid from a database table that does not contain a single-field primary key. Therefore, the field I am supplying as id is not unique and the same one exists in several rows. Because of this, when passing a reference to the data with ondblClickRow to a function external to the grid, I need to use the row number and not the id.

    To test, I'm using ondblClickRow: function(id){alert($("#grid1").getInd('rowid'));}, and I should be getting an alert with the row number, except that it isn't working. I've been over the documentation and can't understand what I am doing wrong... Any help would be greatly appreciated! Thanks in advance, Mario.

    Below is my full grid:

        jQuery(document).ready(function(){
            var mygrid = jQuery("#grid1").jqGrid({
                datatype: 'xmlstring',
                datastr : grid1RsXML,
                width: 1024,
                height: 500,
                colNames:['DEVICE_ID','JOB_SIZE_IN_BYTES','USER_NAME','HOST_NAME','DAY_OF_WEEK','JOB_ID'],
                colModel:[
                    {name:'DEVICE_ID',index:'DEVICE_ID', width:55, sortable:true},
                    {name:'JOB_SIZE_IN_BYTES',index:'JOB_SIZE_IN_BYTES', width:40, sortable:true},
                    {name:'USER_NAME',index:'USER_NAME', width:60, sortable:true},
                    {name:'HOST_NAME',index:'HOST_NAME', width:50, align:"right", sortable:true},
                    {name:'DAY_OF_WEEK',index:'DAY_OF_WEEK', width:10, sortable:true},
                    {name:'JOB_ID',index:'JOB_ID', width:30, sortable:true}
                ],
                rowNum:1000,
                autowidth: true,
                //rowList:[10,20,30],
                rowList:[1],
                pager: '#grid1Pager',
                sortname: 'DEVICE_ID',
                viewrecords: true,
                rownumbers: true,
                sortorder: "desc",
                sortable: true,
                gridview : true,
                xmlReader: {
                    root : "recordset",
                    row: "record",
                    repeatitems: false,
                    id: "DEVICE_ID"
                },
                caption:"All Jobs - Double Click for detailed history",
                ondblClickRow: function(id){alert($("#grid1").getInd('rowid'));},
                toolbar: [true,"top"],
                url: grid1RsXML
            });

    Read the article

  • Linq The specified type 'string' is not a valid provider type.

    - by Joe Pitz
    Using Linq to call a stored procedure that passes a single string. The stored procedure returns a data set row that contains a string and an int. Code:

        PESQLDataContext pe = new PESQLDataContext(strConnStr);
        pe.ObjectTrackingEnabled = false;
        gvUnitsPassed.DataSource = pe.PassedInspection(Line);
        gvUnitsPassed.DataBind();
        pe.Dispose();

    When the code runs, an exception is thrown at the IExecuteResult result = ... statement below. Enclosed is my result class in the designer.cs file:

        [Function(Name = "dbo.PassedInspection")]
        public ISingleResult<PassedInspectionResult> PassedInspection(
            [Parameter(Name = "Model", DbType = "VarChar(4)")] string model)
        {
            IExecuteResult result = this.ExecuteMethodCall(this,
                ((MethodInfo)(MethodInfo.GetCurrentMethod())), model);
            return ((ISingleResult<PassedInspectionResult>)(result.ReturnValue));
        }

        public partial class PassedInspectionResult
        {
            private string _Date;
            private int _Passed;

            public PassedInspectionResult()
            {
            }

            [Column(Storage = "_Date", DbType = "string NULL")]
            public string Date
            {
                get { return this._Date; }
                set
                {
                    if ((this._Date != value))
                    {
                        this._Date = value;
                    }
                }
            }

            [Column(Storage = "_Passed", DbType = "Int NULL")]
            public int Passed
            {
                get { return this._Passed; }
                set
                {
                    if ((this._Passed != value))
                    {
                        this._Passed = value;
                    }
                }
            }
        }

    I have other stored procedures with similar arguments that run just fine. Thanks
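    For what it's worth, that error message usually traces back to the DbType on the Date column: LINQ to SQL expects a SQL Server provider type name there (NVarChar, VarChar, DateTime, ...), not the CLR name 'string'. A hedged sketch of how the mapping would normally look - the column length here is an assumption, not taken from the original schema:

        using System.Data.Linq.Mapping;

        // Hypothetical corrected result type: DbType uses SQL Server type names,
        // so "string NULL" becomes something like "NVarChar(50) NULL".
        public partial class PassedInspectionResult
        {
            private string _Date;
            private int _Passed;

            [Column(Storage = "_Date", DbType = "NVarChar(50) NULL")]   // length is a guess
            public string Date
            {
                get { return this._Date; }
                set { this._Date = value; }
            }

            [Column(Storage = "_Passed", DbType = "Int NULL")]
            public int Passed
            {
                get { return this._Passed; }
                set { this._Passed = value; }
            }
        }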

    Read the article

  • ZF2 Zend\Db Insert/Update Using Mysql Expression (Zend\Db\Sql\Expression?)

    - by Aleross
    Is there any way to include MySQL expressions like NOW() in the current build of ZF2 (2.0.0beta4) through Zend\Db and/or TableGateway insert()/update() statements? Here is a related post on the mailing list, though it hasn't been answered: http://zend-framework-community.634137.n4.nabble.com/Zend-Db-Expr-and-Transactions-in-ZF2-Db-td4472944.html

    It appears that ZF1 used something like:

        $data = array(
            'update_time' => new \Zend\Db\Expr("NOW()")
        );

    And after looking through Zend\Db\Sql\Expression I tried:

        $data = array(
            'update_time' => new \Zend\Db\Sql\Expression("NOW()"),
        );

    But get the following error:

        Catchable fatal error: Object of class Zend\Db\Sql\Expression could not be converted to string in /var/www/vendor/ZF2/library/Zend/Db/Adapter/Driver/Pdo/Statement.php on line 256

    As recommended here: http://zend-framework-community.634137.n4.nabble.com/ZF2-beta3-Zend-Db-Sql-Insert-new-Expression-now-td4536197.html I'm currently just setting the date with PHP code, but I'd rather use the MySQL server as the single source of truth for date/times.

        $data = array(
            'update_time' => date('Y-m-d H:i:s'),
        );

    Thanks, I appreciate any input!

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients.

    In the storage servers I've created a GFS2 FS to hold the data, which is wired to drbd. I've chosen GFS2 mainly because of the announced performance and also because of the volume size, which has to be pretty high.

    Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines:

        Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

    In this case, the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load of the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable. I've increased it several times and apparently, after a while, the log entries started appearing less often. The value is now on 32.

    After further investigating the issue, I've come across a different hang, despite the NFS messages still appearing in the logs. Sometimes, the GFS2 FS simply hangs, which causes both the NFS and the storage webserver to stop serving files. Both stay hung for a while and then they resume normal operations. This hang leaves no trace on the client side (also leaves no NFS "not responding" messages) and, on the storage side, the log system appears to be empty, even though rsyslogd is running.

    The nodes connect themselves through a 10Gbps non-dedicated connection, but I don't think this is an issue because the GFS2 hang is confirmed when connecting directly to the active storage server.

    I've been trying to solve this for a while now, and I've tried different NFS configuration options before I found out the GFS2 FS is also hanging. The NFS mount is exported as such:

        /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

    And the NFS client mounts with:

        mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

    After some tests, these were the configurations that yielded more performance to the cluster.

    I am desperate to find a solution for this as the cluster is already in production mode and I need to fix this so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve these problems, as the volume size is extremely large at this point.

    The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards.

    EDIT 1: The servers are physical servers and their specs are:

    Webservers:
    - Intel Bi Xeon E5606 2x4 2.13GHz
    - 24GB DDR3
    - Intel SSD 320 2 x 120GB Raid 1

    Storage:
    - Intel i5 3550 3.3GHz
    - 16GB DDR3
    - 12 x 2TB SATA

    Initially there was a VRack setup between the servers, but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover.

    NFS is therefore over a public connection and not under any private network (it was before the upgrade, where the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers and the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem.

    EDIT 2: Relevant software versions:

        CentOS 2.6.32-279.9.1.el6.x86_64
        nfs-utils-1.2.3-26.el6.x86_64
        nfs-utils-lib-1.1.5-4.el6.x86_64
        gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
        kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
        drbd84-utils-8.4.2-1.el6.elrepo.x86_64

    DRBD configuration on storage servers:

        #/etc/drbd.d/storage.res
        resource storage {
            protocol C;

            on <server1 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server1 ip>:7788;
                meta-disk internal;
            }

            on <server2 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server2 ip>:7788;
                meta-disk internal;
            }
        }

    NFS configuration on storage servers:

        #/etc/sysconfig/nfs
        RPCNFSDCOUNT=32
        STATD_PORT=10002
        STATD_OUTGOING_PORT=10003
        MOUNTD_PORT=10004
        RQUOTAD_PORT=10005
        LOCKD_UDPPORT=30001
        LOCKD_TCPPORT=30001

    (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

    GFS2 configuration:

        # gfs2_tool gettune <mountpoint>
        incore_log_blocks = 1024
        log_flush_secs = 60
        quota_warn_period = 10
        quota_quantum = 60
        max_readahead = 262144
        complain_secs = 10
        statfs_slow = 0
        quota_simul_sync = 64
        statfs_quantum = 30
        quota_scale = 1.0000 (1, 1)
        new_files_jdata = 0

    Storage network environment:

        eth0      Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
                  inet6 addr: <ip address> Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

        eth0:0    Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    The IP addresses are statically assigned with the given network configurations:

        DEVICE="eth0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        ONBOOT="yes"
        TYPE="Ethernet"
        IPADDR=<ip address>
        NETMASK=<net mask>

    and

        DEVICE="eth0:0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        IPADDR=<ip failover>
        NETMASK=<net mask>
        ONBOOT="yes"
        BROADCAST=<bcast address>

    Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers:

        #/etc/hosts
        <storage ip failover address> active.storage.vlan
        <webserver ip failover address> active.service.vlan

    As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar.

    One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details please tell me.

    EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while.

        INFO: task nfsd:3029 blocked for more than 120 seconds.
        "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        nfsd          D 0000000000000000     0  3029      2 0x00000080
         ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f
         ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f
         ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098
        Call Trace:
         [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40
         [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2]
         [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180
         [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50
         [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2]
         [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2]
         [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2]
         [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2]
         [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90
         [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2]
         [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2]
         [<ffffffff81188cf4>] vfs_create+0xb4/0xe0
         [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd]
         [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd]
         [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd]
         [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc]
         [<ffffffff810602a0>] ? default_wake_function+0x0/0x20
         [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc]
         [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd]
         [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd]
         [<ffffffff81091de6>] kthread+0x96/0xa0
         [<ffffffff8100c14a>] child_rip+0xa/0x20
         [<ffffffff81091d50>] ? kthread+0x0/0xa0
         [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • How do I pass a custom field to a hook (Invision Power Board [ipb] / PHP)

    - by Julian Young
    A long shot, but here's hoping someone has some experience coding PHP hooks for Invision Power Board forums. I'm attempting to code a status addition, and the PHP works fine on its own; it's the passing of IPB's reference to my hook that is the issue. I.e. you set up a custom field in your forum for MSN Username, then from within a skin/template hook you pass the custom field to the hook and then use your PHP code to check on the status.

    Here is the IPB skin code I am hooking into on Global-userInfoPane:

        <if test="authorcfields:|:$author['custom_fields'] != """>
            <foreach loop="customFieldsOuter:$author['custom_fields'] as $group => $data">
                <foreach loop="customFields:$author['custom_fields'][ $group ] as $field">
                    <if test="$field != ''">
                        <li>
                            {$field}
                        </li>
                    </if>
                </foreach>
            </foreach>
        </if>

    Although I could easily add my own skin hook here, i.e.:

        <if test="myHookHere:|:1===1"></if>

    Literally all I need is a single custom field entry from here passed to my hook. If I query every member when the hook is run then that will result in many extra SQL queries per page view. All I want to do is pass that specific custom field to the hook, i.e.:

        myHookHere( $customfield['msn_username'] )

    Is this possible? How do you reference the custom field? Can I execute pure PHP from here? Appreciate anyone that can help! I tried the official Invision forums but haven't had much luck.

    Read the article

  • Mixed surrogate composite key insert in JPA 2.0, PostgreSQL and Hibernate 3.5

    - by Gerald
    First off, we are using JPA 2.0 and Hibernate 3.5 as persistence provider on a PostgreSQL database. We successfully use the sequence of the database via the JPA 2.0 annotations as an auto-generated value for single-field surrogate keys and all works fine. Now we are implementing a bi-temporal database scheme that requires a mixed key in the following manner:

    Table 1:
    - id (pk, integer, auto-generated-sequence)
    - validTimeBegin (pk, dateTime)
    - validTimeEnd (dateTime)
    - firstName (varChar)

    Now we have a problem. You see, if we INSERT a new element, the field id is auto-generated and that's fine. Only, if we want to UPDATE the field within this scheme, then we have to change the validTimeBegin column WITHOUT changing the id-field and insert it as a new row like so:

    BEFORE THE UPDATE OF THE ROW:

        |---|-------------------------|-------------------------|-----------|
        | id| validTimeBegin          | validTimeEnd            | firstName |
        |---|-------------------------|-------------------------|-----------|
        | 1 | 2010-05-01-10:00:00.000 | NULL                    | Gerald    |
        |---|-------------------------|-------------------------|-----------|

    AFTER THE UPDATE OF THE ROW happening at exactly 2010-05-01-10:35:01.788 server-time (we update the person with the id 1 to reflect his new first name...):

        |---|-------------------------|-------------------------|-----------|
        | id| validTimeBegin          | validTimeEnd            | firstName |
        |---|-------------------------|-------------------------|-----------|
        | 1 | 2010-05-01-10:00:00.000 | 2010-05-01-10:35:01.788 | Gerald    |
        |---|-------------------------|-------------------------|-----------|
        | 1 | 2010-05-01-10:35:01.788 | NULL                    | Jerry     |
        |---|-------------------------|-------------------------|-----------|

    So our problem is that this doesn't work at all using an auto-generated sequence for the field id, because when inserting a new row the id is ALWAYS auto-generated, although it really is part of a composite key which should sometimes behave differently.

    So my question is: is there a way to tell Hibernate via JPA to stop auto-generating the id-field in the case where I want to generate a new variety of the same person, and go on as usual in every other case, or do I have to take over the whole id-generation with custom code? Thanks in advance, Gerald

    Read the article

  • Alternative to FindAncestor RelativeSource in Silverlight 4 to bind to a property of the page

    - by TimothyP
    Hi, FindAncestor RelativeSource only supports 'Self' and 'TemplatedParent', but I have to bind the width of a popup to the width of the page. Giving the page a name causes problems because sometimes it will throw exceptions saying a control with that name is already present in the visual tree.

        <Popup IsOpen="True"
               Width="{Binding ElementName=BordPage, Path=Width}"
               Height="{Binding ElementName=BordPage, Path=Height}">

    Background information: I'm using an SL4 navigation-based application here. BordPage is a navigation page, which I'm using multiple times within the application. So giving it a name in the page itself is not really a good idea, but I don't know how else I can bind to the width and height of the page.

    What I'm trying to do is have a black border (with opacity 0.8) cover the entire screen (including the controls of the MainPage). Then on top of that I want to display some other controls. Since the application is touch controlled, providing the user with a ComboBox to select a value doesn't really work well. Instead I want to show this black overlay window with a listbox taking up most of the screen so the user can simply touch the value he wants with a single click.

    Update: I just realized I can use the ChildWindow class to do this. But my original question remains.

    Read the article

  • Creating an IIS 6 Virtual Directory with PowerShell v2 over WMI/ADSI

    - by codepoke
    I can create an IIsWebVirtualDir or IIsWebVirtualDirSetting with WMI, but I've found no way to turn the virtual directory into an IIS Application. The virtual directory wants an AppFriendlyName and a Path. That's easy because they're part of the ...Setting object. But in order to turn the virtual directory into an App, you need to set AppIsolated=2 and AppRoot=[its root]. I cannot do this with WMI. I'd rather not mix ADSI and WMI, so if anyone can coach me through making this happen in WMI I'd be very happy. Here's my demo code:

        $server = 'serverName'
        $site = 'W3SVC/10/ROOT/'
        $app = 'AppName'

        # If I use these args, the VirDir is not created at all. Fails to write read-only prop
        # $args = @{'Name'=('W3SVC/10/ROOT/' + $app); `
        #           'AppIsolated'=2;'AppRoot'='/LM/' + $site + $app}

        # So I use this single arg
        $args = @{'Name'=($site + $app)}
        $args   # Look at the args to make sure I'm putting what I think I am

        $v = set-wmiinstance -Class IIsWebVirtualDir -Authentication PacketPrivacy `
             -ComputerName $server -Namespace root\microsoftiisv2 -Arguments $args
        $v.Put()   # VirDir now exists

        # Pull the settings object for it and prove I can tweak it
        $filter = "Name = '" + $site + $app + "'"
        $filter
        $v = get-wmiobject -Class IIsWebVirtualDirSetting -Authentication PacketPrivacy `
             -ComputerName $server -Namespace root\microsoftiisv2 -Filter $filter
        $v.AppFriendlyName = $app
        $v.Put()
        $v   # Yep. Changes work. Goes without saying I cannot change AppIsolated or AppRoot

        # But ADSI can change them without a hitch
        # Slower than molasses in January, but it works
        $a = [adsi]("IIS://$server/" + $site + $app)
        $a.Put("AppIsolated", 2)
        $a.Put("AppRoot", ('/LM/' + $site + $app))
        $a.Put("Path", "C:\Inetpub\wwwroot\news")
        $a.SetInfo()
        $a

    Any thoughts?

    Read the article

  • .NET "must-have" development tools

    - by nzpcmad
    James Avery wrote a classic article a while back entitled "Ten Must-Have Tools Every Developer Should Download Now", which is a companion to "Visual Studio Add-Ins Every Developer Should Download Now", and Scott Hanselman has an excellent list on his blog. But if you were on a desert island and were only allowed three .NET development tools, which ones would you pick?

    Update: Assuming you already have an IDE like Visual Studio ...

    Update (5): Up to 08/01: The current state of play:

    - Reflector 13
    - Resharper 9
    - NUnit + TestDriven.Net 7
    - Refactor Pro 4
    - Process Explorer (other Sysinternals) 3
    - SnippetCompiler 3
    - CodeRush 3
    - MSDN Library 2
    - LinqPad 2
    - Cruisecontrol.net 2
    - VMWare 2
    - RhinoMocks 2
    - Fiddler 2
    - PowerShell 2
    - PowerCommands for VS 2008 1
    - Sandcastle 1
    - SQL Profiler 1
    - Redgate ANTS profiler 11
    - NCover 1
    - VisualSVN 1
    - Rubber Ducky 1
    - WinMerge 1
    - NAnt 1
    - ViEmu 1
    - AnkhSVN 1
    - dotTrace Profiler 1
    - BeyondCompare 1
    - DPack VS Plugin 1
    - WCF Trace Viewer (SDK) 1
    - xUnit.net 1
    - SourceGear DiffMerge 1
    - Ghostdoc 1
    - Expression Studio 1
    - XAML Pad 1
    - KaXaml 1
    - Blender for 3D modeling 1
    - Snoop a WPF tool 1
    - DiffMerge 1
    - DPack 1
    - NDepend 1
    - Kodos 1
    - WatiN 1
    - HTTPWatch Basic Edition 1
    - Paint.Net 1
    - Mole For VS 1

    What I find particularly interesting about this is that "NUnit + TestDriven.Net" is right up there in third place, which shows the growing emphasis on testing as an integral part of the development process rather than as an adjunct which is simply bolted on. And I'm somewhat perplexed that Codesmith didn't receive a single vote?

    Read the article

  • iPhone: how to use performSelector:onThread:withObject:waitUntilDone: method?

    - by Michael Kessler
    Hi all, I am trying to use a separate thread for working with some API. The problem is that I am not able to use the performSelector:onThread:withObject:waitUntilDone: method with a thread that I've instantiated for this. My code:

        @interface MyObject : NSObject {
            NSThread *_myThread;
        }
        @property(nonatomic, retain) NSThread *myThread;
        @end

        @implementation MyObject

        @synthesize myThread = _myThread;

        - (NSThread *)myThread {
            if (_myThread == nil) {
                NSThread *myThreadTemp = [[NSThread alloc] init];
                [myThreadTemp start];
                self.myThread = myThreadTemp;
                [myThreadTemp release];
            }
            return _myThread;
        }

        - (id)init {
            if (self = [super init]) {
                [self performSelector:@selector(privateInit:) onThread:[self myThread] withObject:nil waitUntilDone:NO];
            }
            return self;
        }

        - (void)privateInit:(id)object {
            NSLog(@"MyObject - privateInit start");
        }

        - (void)dealloc {
            [_myThread release];
            _myThread = nil;
            [super dealloc];
        }

        @end

    "MyObject - privateInit start" is never printed. What am I missing? I tried to instantiate the thread with a target and selector, and tried to wait for method execution completion (waitUntilDone:YES). Nothing helps.

    UPDATE: I don't need this multithreading for separating costly operations onto another thread. In that case I could use performSelectorInBackground as mentioned in a few answers. The main reason for this separate thread is the need to perform all the actions in the API (TTS by Loquendo) from one single thread. Meaning that I have to create an instance of the TTS object and call methods on that object from the same thread all the time.

    Read the article

  • How do you load and save content from a Silverlight 4 RichTextBox control?

    - by AnthonyWJones
    I've been reviewing the features of the RichTextBox control in Silverlight 4. What I've yet to find is any example of loading and saving content in the RichTextBox. Has anyone come across any, or can anyone shed some light on it?

    The control has a BlocksCollection, so I guess one could use the XamlReader to load a bunch of markup, assuming that markup has a single top-level node of type Block, then add that block to the Blocks collection. Seems a shame that the RichTextBox bothers to have a "collection" in this case - why not simply a top-level Block item?

    Nevertheless, that still leaves saving the content of a RichTextBox; I have no idea where to begin with that one. I'm sure I must be missing the obvious here, but unless loading and saving data into and from a RichTextBox is at least possible, if not easy, I can't see how we can actually put it to use.

    Edit: Thanks to DaveB's answer I found discussion of something called the DocumentPersister. However, no reference to this class can be found in the MSDN documentation, nor can I find it in the installed dlls via an object browser search. Anyone, anyone at all?
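    For the loading half at least, the documented object model can be driven directly - a minimal sketch below (the class/method name and text are illustrative, not from the post; whether the control's Xaml string property is the sanctioned way to persist the content back out is an assumption worth verifying separately):

        using System.Windows.Controls;
        using System.Windows.Documents;

        public static class RichTextContent
        {
            // Minimal sketch of programmatic loading for a Silverlight 4 RichTextBox.
            public static void LoadGreeting(RichTextBox box)
            {
                box.Blocks.Clear();

                var bold = new Bold();
                bold.Inlines.Add(new Run { Text = "rich" });

                var paragraph = new Paragraph();
                paragraph.Inlines.Add(new Run { Text = "Hello, " });
                paragraph.Inlines.Add(bold);
                paragraph.Inlines.Add(new Run { Text = " text." });

                box.Blocks.Add(paragraph);
            }
        }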

    Read the article

  • Why were namespaces removed from ECMAScript consideration?

    - by Bob
    Namespaces were once a consideration for ECMAScript (the old ECMAScript 4) but were taken out. As Brendan Eich says in this message:

        One of the use-cases for namespaces in ES4 was early binding (use namespace intrinsic), both for performance and for programmer comprehension -- no chance of runtime name binding disagreeing with any earlier binding. But early binding in any dynamic code loading scenario like the web requires a prioritization or reservation mechanism to avoid early versus late binding conflicts. Plus, as some JS implementors have noted with concern, multiple open namespaces impose runtime cost unless an implementation works significantly harder. For these reasons, namespaces and early binding (like packages before them, this past April) must go.

    But I'm not sure I understand all of that. What exactly is a prioritization or reservation mechanism and why would either of those be needed? Also, must early binding and namespaces go hand-in-hand? For some reason I can't wrap my head around the issues involved. Can anyone attempt a more fleshed out explanation?

    Also, why would namespaces impose runtime costs? In my mind I can't help but see little difference in concept between a namespace and a function using closures. For instance, Yahoo and Google both have YAHOO and google objects that "act like" namespaces in that they contain all of their public and private variables, functions, and objects within a single access point. So why, then, would a namespace be so significantly different in implementation? Maybe I just have a misconception as to what a namespace is exactly.

    Read the article

  • Data Modeling: Logical Modeling Exercise

    - by swisscheese
    In trying to learn the art of data storage I have been trying to take in as much solid information as possible. PerformanceDBA posted some really helpful tutorials/examples in the following posts among others: "is my data normalized?" and "Relational table naming convention". I already asked a subset question of this model here. So to make sure I understood the concepts he presented, and that I have seen elsewhere, I wanted to take things a step or two further and see if I am grasping the concepts. Hence the purpose of this post, which hopefully others can also learn from. Everything I present is conceptual to me and for learning rather than applying it in some production system. It would be cool to get some input from PerformanceDBA also, since I used his models to get started, but I appreciate all input given from anyone.

    As I am new to databases and especially modeling, I will be the first to admit that I may not always ask the right questions, explain my thoughts clearly, or use the right verbiage due to lack of expertise on the subject. So please keep that in mind and feel free to steer me in the right direction if I head off track. If there is enough interest in this I would like to take this from the logical to physical phases to show the evolution of the process and share it here on Stack. I will keep this thread for the Logical Diagram though and start a new one for the additional steps. For my understanding I will be building a MySQL DB in the end to run some tests and see if what I came up with actually works.

    Here is the list of things that I want to capture in this conceptual model (Edit for V1.2):

    - The purpose of this is to list Bands, their members, and the Events that they will be appearing at, as well as offer music and other merchandise for sale
    - Members will be able to match up with friends
    - Members can write reviews on the Bands, their music, and their events. There can only be one review per member on a given item, although they can edit their reviews and history will be maintained.
    - BandMembers will have the chance to write a single Comment on Reviews about the Band they are associated with. Collectively as a Band only one Comment is allowed per Review.
    - Members can then rate all Reviews and Comments but only once per given instance
    - Members can select their favorite Bands, music, Merchandise, and Events
    - Bands, Songs, and Events will be categorized into the type of Genre that they are and then further subcategorized into a SubGenre if necessary. It is ok for a Band or Event to fall into more than one Genre/SubGenre combination.
    - Event date, time, and location will be posted for a given band and members can show that they will be attending the Event. An Event can be comprised of more than one Band, and multiple Events can take place at a single location on the same day
    - Every party will be tied to at least one address and address history shall be maintained. Each party could also be tied to more than one address at a time (i.e. billing, shipping, physical)
    - There will be stored profiles for Bands, BandMembers, and general members.

    So there it is, maybe a bit involved, but it could be a great learning tool for many, hopefully, as the process evolves and input is given by the community. Any input?

    EDIT v1.1 - In response to PerformanceDBA

    U.3) "That means no merchandise other than Band merchandise in the database. Correct?" That was my original thought, but you got me thinking. Maybe the site would want to sell its own merchandise or even other merchandise from the bands. Not sure what mod to make for that. Would it require an entire rework of the Catalog section or just the identifying relationship that exists with the Band? Attempted a mod to sell both complete albums or songs. Either way they would both be in electronic format, only available for download. That is why I listed an Album as being comprised of Songs rather than 2 separate entities.

    U.5) I understand what you bring up about the circular relation with Favorite. I would like to get to this "It is either one Entity with some form of differentiation (FavoriteType) which identifies its treatment", but how to do so is not clear to me. What am I missing here?

    U.6) "Business Rules. This is probably the only area you are weak in." Thanks for the honest response. I will readdress these, but I hope to clear up some confusion in my head first with the responses I have posted back to you.

    Q.1) Yes, I would like to have Accepted, Rejected, and Blocked. I am not sure what you are referring to as to how this would change the logical model?

    Q.2) A person does not have to be a User. They can exist only as a BandMember. Is that what you are asking?

    Minor Issue - Zero, One, or More... Oops, I admit I forgot to give this attention when building the model. I am submitting this version as is and will address it in a future version. I need to read up more on Constraint Checking to make sure I am understanding things.

    M.4) "Depends if you envision OrderPurchase in the future." Can you expand as to what you mean here?

    EDIT V1.2 - In response to PerformanceDBA's input...

    Lessons learned:
    - I was mixing the concepts of Identifying / Non-Identifying and Cardinality (i.e. Genre / SubGenre), and doing so inconsistently, which made things worse.
    - Associative Tables are not required in Logical Diagrams, as their many-to-many relationships can be depicted and then expanded in the Physical Model.
    - I was overlooking the Cardinality in a lot of the relationships.
    - The importance of reading through relationships using effective Verb Phrases to reassure myself I am modeling what I want to accomplish.

    U.2) In the concept of this model it is only required to track a Venue as a location for an Event. No further data needs to be collected. With that being said, Events will take place on a given EventDate and will be hosted at a Venue. Venues will host multiple events and possibly multiple events on a given date. In my new model my thinking was that EventDate is already tied to Event. Therefore, Venue will not need a relationship with EventDate. The 5th and 6th bullets you have listed under U.2) leave me questioning my thinking though. Am I missing something here?

    U.3) Is it time to move the link between Item and Band up to Item and Party instead? With the current design I don't see a possibility to sell merchandise not tied to the band, as you have brought up.

    U.5) I left it as per your input rather than making it a discrete Supertype/Subtype Relationship, as I don't see a benefit of having that type of roll-up.

    Additional Revisions

    AR.1) After going through the exercise for FavoriteItem, I feel that Item to Review requires a many-to-many relationship, so that is indicated. Necessary?

    Ok, here we go for v1.3. I took a few days on this version, going back and forth with my design. Once the logical process is complete, as I want to see if I am on the right track, I will go through in depth what I have learned and the troubles I faced as a beginner going through this process. The big point for this version was that it took throwing in some Keys to help see what I was missing in the past. Going through the process of doing a matrix proved to be of great help also. Regardless of anything, if it wasn't for the input given by PerformanceDBA I would still be a lost soul wandering in the dark. Who knows, my current design might reaffirm that I still am, but I have learned a lot, so I know I at least have a flashlight in my hand.

    At this point in time I admit that I am still confused about identifying and non-identifying relationships. In my model I had to use non-identifying relationships with non-nulls just to join the relationships I wanted to model. In reading a lot on the subject there seems to be a lot of disagreement and indecisiveness on the subject, so I did what I thought represented the right things in my model. When to force (identifying) and when to be free (non-identifying)? Anyone have inputs?

    EDIT V1.4 - Ok, took the V1.3 inputs and cleaned things up for this V1.4. Currently working on a V1.5 to include attributes.

    Read the article

  • XNA vs SlimDX for offscreen renderer

    - by Groky
    Hello, I realise there are numerous questions on here asking about choosing between XNA and SlimDX, but these all relate to game programming.

    A little background: I have an application that renders scenes from XML descriptions. Currently I am using WPF 3D and this mostly works, except that WPF has no way to render scenes offscreen (i.e. on a server, without displaying them in a window), and also rendering to a bitmap causes WPF to fall back to software rendering. So I'm faced with having to write my own renderer. Here are the requirements:

    - Mix of 3D and 2D elements.
    - Relatively few elements per scene (tens of meshes, tens of 2D elements).
    - Large scenes (up to 3000px square for print).
    - Only a single frame will be rendered (i.e. FPS is not an issue).
    - Opacity masks.
    - Pixel shaders.
    - Software fallback (servers may or may not have a decent gfx card).
    - Possibility of being rendered offscreen.

    As you can see it's pretty simple stuff and WPF can manage it quite nicely, except for the not-being-able-to-export-the-scene problem. In particular I don't need many of the things usually needed in game development. So bearing that in mind, would you choose XNA or SlimDX? The non-rendering portion of the code is already written in C#, so I want to stick with that.

    Read the article

  • Code Golf: Code 39 Bar Code

    - by gwell
    The challenge

    The shortest code by character count to draw an ASCII representation of a Code 39 bar code. Wikipedia article about Code 39: http://en.wikipedia.org/wiki/Code_39

    Input

    The input will be a string of legal characters for Code 39 bar codes. This means 43 characters are valid: 0-9, A-Z, (space) and -.$/+%. The * character will not appear in the input as it is used as the start and stop character.

    Output

    Each character encoded in Code 39 bar codes has nine elements: five bars and four spaces. Bars will be represented with # characters, and spaces will be represented with the space character. Three of the nine elements will be wide. The narrow elements will be one character wide, and the wide elements will be three characters wide. An inter-character space of a single space should be added between each character pattern. The pattern should be repeated so that the height of the bar code is eight characters high.

    The start/stop character * (bWbwBwBwb) would be represented like this, with the same row repeated eight times:

        #   # ### ### #
        #   # ### ### #
        #   # ### ### #
        #   # ### ### #
        #   # ### ### #
        #   # ### ### #
        #   # ### ### #
        #   # ### ### #

    Reading one row left to right, the nine elements are: narrow bar, wide space, narrow bar, narrow space, wide bar, narrow space, wide bar, narrow space, narrow bar - followed by the one-character inter-character space.

    The start and stop character * will need to be output at the start and end of the bar code. No quiet space will need to be included before or after the bar code. No check digit will need to be calculated. Full ASCII Code39 encoding is not required, just the standard 43 characters. No text needs to be printed below the ASCII bar code representation to identify the output contents. The character # can be replaced with another character of higher density if wanted. Using the full block character U+2588 would allow the bar code to actually scan when printed.
Test cases Input: ABC Output: # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # Input: 1/3 Output: # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # Input: - $ (minus space dollar) Output: # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # Code count includes input/output (full program).
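    Not a golfed entry, but a sketch of the core rendering step the spec describes - expanding a width pattern such as bWbwBwBwb into one row and repeating it eight times. Only the start/stop character is wired in; the full 43-character pattern table is deliberately left out rather than guessed at:

        using System;
        using System.Text;

        class Code39Sketch
        {
            // 'b'/'w' = narrow bar/space (1 char wide), 'B'/'W' = wide bar/space (3 chars wide).
            static string RenderRow(string pattern)
            {
                var row = new StringBuilder();
                foreach (char c in pattern)
                    row.Append((c == 'b' || c == 'B') ? '#' : ' ', char.IsUpper(c) ? 3 : 1);
                return row.ToString();
            }

            static void Main()
            {
                // The start/stop character '*' from the spec; a real entry would also map the 43 data characters
                // and join them with the single-space inter-character gap.
                string star = RenderRow("bWbwBwBwb");
                for (int i = 0; i < 8; i++)
                    Console.WriteLine(star);   // prints "#   # ### ### #"
            }
        }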

    Read the article
