Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first 4 and X: used as a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- this is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong. Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive "K:", when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes re-synced so that failover can be accomplished? I don't know that there wasn't a "K:" drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • Close application on error

    - by poke
    I’m currently writing an application for the Android platform that requires a mounted SD card (or ExternalStorage). I know that it might not be the best way to require something like that, but the application will work with quite a lot of data, and I don’t even want to think about storing that on the device’s storage. Anyway, to ensure that the application won’t run without the external storage, I do a quick check in the activity’s onCreate method. If the card is not mounted, I want to display an error message and then quit the application. My current approach looks like this: public void onCreate ( Bundle savedInstanceState ) { super.onCreate( savedInstanceState ); setContentView( R.layout.main ); try { // initialize data storage // will raise an exception if it fails, or the SD card is not mounted // ... } catch ( Exception e ) { AlertDialog.Builder builder = new AlertDialog.Builder( this ); builder .setMessage( "There was an error: " + e.getMessage() ) .setCancelable( false ) .setNeutralButton( "Ok.", new DialogInterface.OnClickListener() { public void onClick ( DialogInterface dialog, int which ) { MyApplication.this.finish(); } } ); AlertDialog error = builder.create(); error.show(); return; } // continue ... } When I run the application, and the exception gets raised (I raise it manually to check if everything works), the error message is displayed correctly. However when I press the button, the application closes and I get an Android error, that the application was closed unexpectedly (and I should force exit). I read some topics before on closing an application, and I know that it maybe shouldn’t happen like that. But how should I instead prevent the application from continuing to run? How do you close the application correctly when an error occurs?
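
    A minimal sketch of the pattern usually suggested for this, assuming the check happens in the activity's onCreate (class and resource names below are placeholders, not the poster's actual code): test the storage state up front, show the dialog, and call the activity's own finish() from the button handler, returning from onCreate so nothing else runs. Killing the process explicitly is generally discouraged; letting the last activity finish is the supported way to "close" an Android app.

        import android.app.Activity;
        import android.app.AlertDialog;
        import android.content.DialogInterface;
        import android.os.Bundle;
        import android.os.Environment;

        public class MainActivity extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main); // R.layout.main is a placeholder layout

                // Check the precondition before touching any storage.
                if (!Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) {
                    new AlertDialog.Builder(this)
                        .setMessage("External storage is not available.")
                        .setCancelable(false)
                        .setNeutralButton("OK", new DialogInterface.OnClickListener() {
                            public void onClick(DialogInterface dialog, int which) {
                                // Finish this activity; Android tears the process down
                                // on its own once no activities are left.
                                finish();
                            }
                        })
                        .show();
                    return; // skip the rest of onCreate so nothing uses the missing storage
                }
                // ... normal initialization with the storage available ...
            }
        }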

    Read the article

  • Having trouble plugging jQuery FullCalendar into Squarespace page

    - by ellenchristine
    When I test the fullcalendar widget locally, it works fine. However, when I try to integrate it with Squarespace, it doesn't show up. Here is what is in my head tag: <link rel='stylesheet' type='text/css' href='storage/scripts/fullcalendar.css' /> <script type='text/javascript' src='storage/scripts/jquery.js'></script> <script type='text/javascript' src='storage/scripts/fullcalendar.js'></script> <script type='text/javascript'> $(document).ready(function() { // page is now ready, initialize the calendar... $('#calendar').fullCalendar({ aspectRatio:2 }) }); </script> I have this in my body tag: <div id="calendar" style="width:500px; height:330px; background-color:#CCCCCC;"></div> The div displays but the calendar doesn't appear at all. I can't figure out why this isn't working! The calendar should definitely fit in the div space (at least it does locally). I've used jQuery with Squarespace before, and I don't see where my error is.

    Read the article

  • Created files on Archos 5 invisible on Windows XP

    - by user352042
    I am fairly new to Android and this is my first post, so I apologise in advance if I am breaking protocol or posting to the wrong board. Please feel free to move this post somewhere more appropriate if required. I am developing for the 160 GB Archos 5 Internet tablet. Not ideal as a development platform, I know, but customer requirements mean we have no choice. It is running Android 1.6. I have updated the device firmware to the most recent available. Updating the version of Android is not an option at this point. Part of my app's requirement is to write information out to .txt files on the external storage directory so that these can be copied over the USB connection to a Windows XP PC using the Mobile media device (MTP) mode. I have followed all the instructions I have come across carefully, e.g. I check that the storage is available using the technique described at http://developer.android.com/guide/topics/data/data-storage.html#filesExternal. However, although the files are created successfully on the device (I can browse them and open them using the device's File Explorer - they are fine), when I connect the device to a Windows XP computer none of the directories or files I created appear, and the size of their parent folders suggests they do not exist. I have tried running over ADB, checked logcat, tried a (signed) release version and even written a second test application which just creates a folder (this behaves the same, i.e. it creates the folder but it is not visible in Windows Explorer) - nothing anywhere gives me any suggestion as to what the problem might be. If anyone has heard of this before or has any ideas as to what else I could try to fix it, please get in touch! We do not have any other devices to test on at the moment, although I hope to remedy this soon, customer permitting.
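
    One common first thing to try when freshly written files are invisible from a connected PC is to ask Android's media scanner to index them; whether that is actually what the Archos MTP view depends on is only an assumption, but the broadcast below has existed since API level 1, so it is cheap to rule in or out. A minimal sketch:

        import android.content.Context;
        import android.content.Intent;
        import android.net.Uri;

        import java.io.File;

        public class MediaScanHelper {
            // Ask the platform's media scanner to pick up a newly written file so it
            // becomes visible to the USB/MTP view without remounting the device.
            public static void requestScan(Context context, File newFile) {
                Intent scanIntent = new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE);
                scanIntent.setData(Uri.fromFile(newFile));
                context.sendBroadcast(scanIntent);
            }
        }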

    Read the article

  • Some Sonatype Nexus questions

    - by smallufo
    I deployed a Sonatype Nexus server inside my LAN, mapping some remote repositories to my public repositories. First question: why don't these repositories sync with the "real" repositories? For example, I mapped Maven Central (http://repo1.maven.org/maven2) to "central", but when I browse http://smallufo:8081/nexus/content/repositories/central/org/springframework/ the packages are not complete. In http://repo2.maven.org/maven2/org/springframework/ there are tons of artifacts, but I only have some of them, and the versions are old - e.g. spring-core is only 2.5.6.SEC01, while the latest version is 3.0.2.RELEASE - and my Maven client seems to find only the old artifacts. "central" is a proxy repository, so it should be the same as the remote server. I tried to "Expire Cache", "ReIndex" and "Incremental ReIndex" the whole "central" repository: after a long time with the Java process at almost 100% load, the situation is not much better; it just added some artifacts and still does not reflect the real Maven Central data. Second question: what is the difference between "Expire Cache", "ReIndex" and "Incremental ReIndex"? Even though I can "search" for spring-core 3.0.2.RELEASE, my m2eclipse still cannot find it. I can also see spring-core-3.0.2.RELEASE in the "index" (but it is not available in "storage"), so why can't m2eclipse make use of it? It seems m2eclipse can only install artifacts that are in the storage. If this is how Nexus works, how do I "force" Nexus to download spring-core-3.0.2.RELEASE into its storage? How do I solve these strange inconsistencies? Thanks a lot!

    Read the article

  • S3 file uploading from Mac app through PHP?

    - by Ilija Tovilo
    I have asked this question before, but it was deleted due to too little information. I'll try to be more concrete this time. I have an Objective-C Mac application which should allow users to upload files to S3 storage. The S3 storage is mine; the users don't have an Amazon account. Until now, the files were uploaded directly to the Amazon servers. After thinking some more about it, that wasn't really a great concept regarding security and flexibility. I want to add a server in between. The user should authenticate with my server, the server would open a session if the authentication was successful, and the file sharing could begin. Now my question. I want to upload the files to S3. One option would be to make a POST request and wait until my server receives the file. The problem here is that there would be a delay while the file is uploaded from my server to the S3 servers, and it would double the upload time. Best would be if I could validate the request and then redirect it, so the client uploads directly to the S3 storage. I'm not sure if this is possible somehow. Uploading directly to S3 doesn't seem to be very smart. After looking into other apps like Droplr and Dropmark, it looks like they don't do this. By the way, I checked this using Little Snitch. They have their API on their own web server, and that's it. Could someone clear things up for me? EDIT: How should I transmit my files to S3? Is there a way to "forward" them, or do I have to upload them to my server and then upload them from there to S3? Like I said, other apps can do this efficiently and without the need to communicate with S3 directly.
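
    The "validate the request and then redirect it" idea described above is essentially what S3 pre-signed URLs provide: the server authenticates the user and hands back a short-lived signed URL, and the client PUTs the file straight to S3 without the bytes ever passing through the server. The poster's middle tier is PHP, so the sketch below (AWS SDK for Java, invented bucket and key names) is only an illustration of the flow, not their stack:

        import com.amazonaws.HttpMethod;
        import com.amazonaws.auth.BasicAWSCredentials;
        import com.amazonaws.services.s3.AmazonS3Client;
        import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

        import java.net.URL;
        import java.util.Date;

        public class UploadTicketService {
            private final AmazonS3Client s3 =
                    new AmazonS3Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Called after the user has authenticated against our own server.
            // The returned URL lets the desktop client PUT the file directly to S3
            // for the next 15 minutes; the file never passes through this server.
            public URL createUploadUrl(String bucket, String key) {
                Date expires = new Date(System.currentTimeMillis() + 15L * 60 * 1000);
                GeneratePresignedUrlRequest request =
                        new GeneratePresignedUrlRequest(bucket, key)
                                .withMethod(HttpMethod.PUT)
                                .withExpiration(expires);
                return s3.generatePresignedUrl(request);
            }
        }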

    Read the article

  • File name containing more than 16 characters inside parentheses failing

    - by Tom anMoney
    I am generating file names that contain a timestamp in the following format: "base_name (yyyy-mm-dd hhmmss).ext" This seems to cause a problem on Android. Here's my log: /storage/sdcard0/anMoney/transfer/Net worth over time _ Forecast (2012-11-19 110550).pdf E/Gmail (11802): java.io.FileNotFoundException: /storage/sdcard0/myapp/transfer/Net worth over time _ Forecast (2012-11-19 110550).pdf: open failed: ENOENT (No such file or directory) E/Gmail (11802): at libcore.io.IoBridge.open(IoBridge.java:416) E/Gmail (11802): at java.io.FileInputStream.<init>(FileInputStream.java:78) E/Gmail (11802): at java.io.FileInputStream.<init>(FileInputStream.java:105) E/Gmail (11802): at android.content.ContentResolver.openInputStream(ContentResolver.java:445) E/Gmail (11802): at com.google.android.gm.provider.MailEngine.cacheAttachment(MailEngine.java:3054) E/Gmail (11802): at com.google.android.gm.provider.MailEngine.sendOrSaveDraft(MailEngine.java:2746) E/Gmail (11802): at com.google.android.gm.provider.MailProvider.sendOrSaveDraft(MailProvider.java:477) E/Gmail (11802): at com.google.android.gm.provider.MailProvider.insert(MailProvider.java:534) E/Gmail (11802): at android.content.ContentProvider$Transport.insert(ContentProvider.java:201) E/Gmail (11802): at android.content.ContentResolver.insert(ContentResolver.java:864) E/Gmail (11802): at com.google.android.gm.provider.Gmail$MessageModification.sendOrSaveNewMessage(Gmail.java:3576) E/Gmail (11802): at com.google.android.gm.ComposeActivity$SendOrSaveTask$1.onInitializationComplete(ComposeActivity.java:1765) E/Gmail (11802): at com.google.android.gm.provider.MailEngine$5.run(MailEngine.java:1006) E/Gmail (11802): at android.os.Handler.handleCallback(Handler.java:615) E/Gmail (11802): at android.os.Handler.dispatchMessage(Handler.java:92) E/Gmail (11802): at android.os.Looper.loop(Looper.java:137) E/Gmail (11802): at android.os.HandlerThread.run(HandlerThread.java:60) E/Gmail (11802): Caused by: libcore.io.ErrnoException: open failed: ENOENT (No such file or directory) Now, if I trim the file name to have only 16 characters within the parentheses, everything is working as expected. I am able to send the file as a GMail attachment. The following file name is working fine: /storage/sdcard0/myapp/transfer/Net worth over time _ Forecast (2012-11-19 11070).pdf I tried the following troubleshooting: It's not the overall length of the file name, as if I shorten the base name, the same behavior remains It's not GMail, uploading the file to Google Drive fails similarly 16 characters inside the parentheses work, but not 17 It's not the space character inside the parentheses that causes the issue, as I replaced it with a dash and it's the same problem. Anybody has any ideas on what's going on here?
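
    The log alone does not explain the 17-character failure, but purely as a defensive workaround (assuming the observed limit is real on the affected devices), the attachment name can be generated with a more compact timestamp so the parenthesized part stays short:

        import java.io.File;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.Locale;

        public class ExportNames {
            // Builds "base_name (yyyyMMdd-HHmmss).ext" -- 15 characters inside the
            // parentheses, below the length that was observed to fail.
            public static File timestamped(File directory, String baseName, String extension) {
                String stamp = new SimpleDateFormat("yyyyMMdd-HHmmss", Locale.US).format(new Date());
                return new File(directory, baseName + " (" + stamp + ")." + extension);
            }
        }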

    Read the article

  • Dealing with a large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of string text blocks (about 20K, and the largest I have seen is about 200K of them) in a short span of time and store them in a relational database. Each text block is relatively small, averaging about 15 short lines (about 300 characters). The current implementation is in C# (VS2008) on .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns of the project, but the priority is performance first, then storage. I am looking for answers to these: Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage? Do you know what would be the best compression algorithm/library to use in this context that gives me the best performance? Currently I just use the standard GZip in the .NET Framework. Do you know any best practices to deal with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET Framework (it is a big project and this requirement is only a small part of it). EDITED: I will keep adding to this to clarify points raised. I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block using the primary key. I have a working solution implemented as above and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.
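
    The post's stack is .NET (GZipStream), so the round trip below is only a language-neutral sketch of the "compress before storing" option: the text block is gzipped to a byte array, stored in a binary column, and inflated again when it is fetched by primary key for display. For blocks of only ~300 characters it is worth measuring first, since gzip's header overhead eats into the savings on very short strings.

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.GZIPInputStream;
        import java.util.zip.GZIPOutputStream;

        public class TextBlockCodec {
            // Compress one text block to bytes before handing it to the data layer.
            public static byte[] compress(String text) throws IOException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
                    gzip.write(text.getBytes(StandardCharsets.UTF_8));
                }
                return bytes.toByteArray();
            }

            // Inverse operation for the later display step; the blocks are only ever
            // read back whole by primary key, so the bytes never need to be indexed.
            public static String decompress(byte[] data) throws IOException {
                try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(data))) {
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = gzip.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                    return new String(out.toByteArray(), StandardCharsets.UTF_8);
                }
            }
        }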

    Read the article

  • Problem with eager loading polymorphic associations using LINQ and NHibernate

    - by Voislav
    Is it possible to eagerly load polymorphic association using Linq and NH? For example: Company is base class, Department is inherited from Company, and Company has association Employees to the User (one-to-many) and also association to the Country (many-to-one). Here is mapping part related to inherited class (without User and Country classes): <class name="Company" discriminator-value="Company"> <id name="Id" type="int" unsaved-value="0" access="nosetter.camelcase-underscore"> <generator class="native"></generator> </id> <discriminator column="OrganizationUnit" type="string" length="10" not-null="true"/> <property name="Name" type="string" length="50" not-null="true"/> <many-to-one name="Country" class="Country" column="CountryId" not-null ="false" foreign-key="FK_Company_CountryId" access="field.camelcase-underscore" /> <set name="Departments" inverse="true" lazy="true" access="field.camelcase-underscore"> <key column="DepartmentParentId" not-null="false" foreign-key="FK_Department_DepartmentParentId"></key> <one-to-many class="Department"></one-to-many> </set> <set name="Employees" inverse="true" lazy="true" access="field.camelcase-underscore"> <key column="CompanyId" not-null="false" foreign-key="FK_User_CompanyId"></key> <one-to-many class="User"></one-to-many> </set> <subclass name="Department" extends="Company" discriminator-value="Department"> <many-to-one name="DepartmentParent" class="Company" column="DepartmentParentId" not-null ="false" foreign-key="FK_Department_DepartmentParentId" access="field.camelcase-underscore" /> </subclass> </class> I do not have problem to eagerly load any of the association on the Company: Session.Query<Company>().Where(c => c.Name == "Main Company").Fetch(c => c.Country).Single(); Session.Query<Company>().Where(c => c.Name == "Main Company").FetchMany(c => c.Employees).Single(); Also, I could eagerly load not-polymorphic association on the department: Session.Query<Department>().Where(d => d.Name == "Department 1").Fetch(d => d.DepartmentParent).Single(); But I get NullReferenceException when I try to eagerly load any of the polymorphic association (from the Department): Assert.Throws<NullReferenceException>(() => Session.Query<Department>().Where(d => d.Name == "Department 1").Fetch(d => d.Country).Single()); Assert.Throws<NullReferenceException>(() => Session.Query<Department>().Where(d => d.Name == "Department 1").FetchMany(d => d.Employees).Single()); Am I doing something wrong or this is not supported yet?

    Read the article

  • How to make and use my first object in Objective-C to store the sender tag

    - by dbonneville
    I started with a sample program that simply sets up a UIView with some buttons on it. Works great. I want to access the sender tag from each button and save it. My first thought was to save it to a global variable but I couldn't get it to work right. I thought the best way would be to create an object with one property and synthesize it so I can get or set it as needed. First, I created GlobalGem.h: #import <Foundation/Foundation.h> @interface GlobalGem : NSObject { int gemTag; } @property int gemTag; -(void) print; @end Then I created GlobalGem.m: #import "GlobalGem.h" @implementation GlobalGem @synthesize gemTag; -(void) print { NSLog(@"gemTag value is %i", gemTag); } @end The sample code I worked from doesn't do anything I can see in "main" where the program initializes. They just have their methods and are set up to handle IBOutlet actions. I want to use this object in the IBOutlet methods. But I don't know where to create the object. In the viewDidLoad, I tried this: GlobalGem *aGem = [[GlobalGem alloc]init]; [aGem print]; I added reference to the .h file of course. The print method works. In the nslog I get "gemTag value is 0". But now I want to use the instance aGem in some of the IBOutlet actions. For instance, if I put this same method in one of the outlet actions like this: - (IBAction)touchButton:(id)sender { [aGem print]; } ...I get a build error saying that "aGem is undeclared". Yes, I'm new to objective c and am very confused. Is the instance I created, aGem, in the viewDidLoad not accessible outside of that method? How would I create an instance of a Class that I can store a variable value that any of the IBOutlet methods can access? All the buttons need to be able to store their sender tag in the instance aGem and I'm stuck. As I mentioned earlier, I was able to write the sender tag to a global variable I declared in the UIView .h file that came with the program, but I ran into issues using it. Where am I off?
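
    The underlying issue is scope rather than anything Objective-C specific: an object created as a local variable inside one method is not visible to other methods, while an instance variable (ivar/property) is. A tiny Java sketch of the same distinction, with all names invented:

        public class GemScreen {
            // A field: every method of this class can read and write it.
            private GlobalGem aGem;

            public void onLoad() {
                aGem = new GlobalGem();            // assign to the field...
                GlobalGem local = new GlobalGem(); // ...a local like this vanishes when onLoad returns
            }

            public void onButtonTouched(int senderTag) {
                aGem.setGemTag(senderTag);         // visible here because aGem is a field
            }
        }

        class GlobalGem {
            private int gemTag;
            public void setGemTag(int value) { gemTag = value; }
            public int getGemTag() { return gemTag; }
        }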

    Read the article

  • Why do I get extra, unexpected results with my ack regex?

    - by Gauthier
    I'm finally learning regexps and training with ack. I believe this uses Perl regexp. I want to match all lines where the first non-blank characters are if (<word> !, with any number of spaces in between the elements. This is what I came up with: ^[ \t]*if *\(\w+ *! It only nearly worked. ^[ \t]* is wrong, since it matches one or none [space or tab]. What I want is to match anything that may contain only space or tab (or nothing). For example these should not match: // if (asdf != 0) else if (asdf != 1) How can I modify my regexp for that? EDIT adding command line ack -i --group -a '^\s*if *\(\w+ *!' c:/work/proj/proj Note the single quotes, I'm not so sure about them anymore. My search base is a larger code base. It does include matching expressions (quite some), but even for example: 274: }else if (y != 0) , which I get as a result of the above command. EDIT adding the result of mobrule's test Mobrule, thanks for providing me a text to test on. I'll copy here what I get on my prompt: C:\Temp\regex>more ack.test # ack.test if (asdf != 0) # no spaces - ok if (asdf != 0) # single space - ok if (asdf != 0) # single tab - ok if (asdf != 0) # multiple space - ok if (asdf != 0) # multiple tab - ok if (asdf != 0) # spaces + tab ok if (asdf != 0) # tab + space ok if (asdf != 0) # space + tab + space ok // if (asdf != 0) # not ok } else if (asdf != 0) # not ok C:\Temp\regex>ack '^[ \t]*if *\(\w+ *!' ack.test C:\Temp\regex>"C:\Program\git\bin\perl.exe" C:\bat\ack.pl '[ \t]*if *\(\w+ *!' a ck.test if (asdf != 0) # no spaces - ok if (asdf != 0) # single space - ok if (asdf != 0) # single tab - ok if (asdf != 0) # multiple space - ok if (asdf != 0) # multiple tab - ok if (asdf != 0) # spaces + tab ok if (asdf != 0) # tab + space ok if (asdf != 0) # space + tab + space ok // if (asdf != 0) # not ok } else if (asdf != 0) # not ok The problem is in my call to my ack.bat! ack.bat contains: "C:\Program\git\bin\perl.exe" C:\bat\ack.pl %* Although I call with a caret, it gets away at the call of the bat file! Escaping the caret with ^^ does not work. Quoting the regex with " " instead of ' ' works. My problem was a DOS/win problem, sorry for bothering you all for that.
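
    As the edits above show, the pattern itself was fine and the culprit was the Windows shell eating the caret. Purely for illustration, the same pattern can be exercised outside any shell quoting, for example from Java:

        import java.util.regex.Pattern;

        public class IfBangCheck {
            // Same pattern as in the question: optional leading spaces/tabs,
            // then "if (<word> !" with any number of spaces in between.
            private static final Pattern IF_NEGATED =
                    Pattern.compile("^[ \\t]*if *\\(\\w+ *!");

            public static void main(String[] args) {
                String[] samples = {
                        "if (asdf != 0)",        // should match
                        "\tif (asdf != 0)",      // should match
                        "// if (asdf != 0)",     // should not match
                        "} else if (asdf != 0)"  // should not match
                };
                for (String line : samples) {
                    System.out.println(IF_NEGATED.matcher(line).find() + "  <-  " + line);
                }
            }
        }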

    Read the article

  • Qt - Calling widget parent's slots

    - by bullettime
    I wrote a small program to test accessing a widget parent's slot. Basically, it has two classes: Widget: namespace Ui { class Widget; } class Widget : public QWidget { Q_OBJECT public: Widget(QWidget *parent = 0); ~Widget(); QLabel *newlabel; QString foo; public slots: void changeLabel(); private: Ui::Widget *ui; }; Widget::Widget(QWidget *parent) : QWidget(parent), ui(new Ui::Widget) { ui->setupUi(this); customWidget *cwidget = new customWidget(); newlabel = new QLabel("text"); foo = "hello world"; this->ui->formLayout->addWidget(newlabel); this->ui->formLayout->addWidget(cwidget); connect(this->ui->pushButton,SIGNAL(clicked()),cwidget,SLOT(callParentSlot())); connect(this->ui->pb,SIGNAL(clicked()),this,SLOT(changeLabel())); } void Widget::changeLabel(){ newlabel->setText(this->foo); } and customWidget: class customWidget : public QWidget { Q_OBJECT public: customWidget(); QPushButton *customPB; public slots: void callParentSlot(); }; customWidget::customWidget() { customPB = new QPushButton("customPB"); QHBoxLayout *hboxl = new QHBoxLayout(); hboxl->addWidget(customPB); this->setLayout(hboxl); connect(this->customPB,SIGNAL(clicked()),this,SLOT(callParentSlot())); } void customWidget::callParentSlot(){ ((Widget*)this->parentWidget())->changeLabel(); } in the main function, I simply created an instance of Widget, and called show() on it. This Widget instance has a label, a QString, an instance of customWidget class, and two buttons (inside the ui class, pushButton and pb). One of the buttons calls a slot in its own class called changeLabel(), that, as the name suggests, changes the label to whatever is set in the QString contained in it. I made that just to check that changeLabel() worked. This button is working fine. The other button calls a slot in the customWidget instance, named callParentSlot(), that in turn tries to call the changeLabel() slot in its parent. Since in this case I know that its parent is in fact an instance of Widget, I cast the return value of parentWidget() to Widget*. This button crashes the program. I made a button within customWidget to try to call customWidget's parent slot as well, but it also crashes the program. I followed what was on this question. What am I missing?

    Read the article

  • How to build my own Application Setting

    - by adisembiring
    I want to build an ApplicationSetting for my application. The ApplicationSetting can be stored in a properties file or in a database table. The settings are stored in key-value pairs. E.g. ftp.host = blade ftp.username = dummy ftp.pass = pass content.row_pagination = 20 content.title = How to train your dragon. I have designed it as follows: Application settings reader: interface IApplicationSettingReader { read(); } DatabaseApplicationSettingReader { dao appSettingDao; AppSettings read() { List<AppSettingEntity> listEntity = appSettingsDao.findAll(); Map<String, String> map = new HashMap<String, String>(); for (AppSettingEntity entity : listEntity) { map.put(entity.getConfigName(), entity.getConfigValue()); } return new AppSettings(map); } } PropertiesApplicationSettingReader { dao appSettingDao; AppSettings read() { //read from some properties file return new AppSettings(map); } } Application settings class: AppSettings { private static AppSettings instance; private Map map; public AppSettings(Map map) { this.map = map; } public static AppSettings getInstance() { if (instance == null) { throw new RuntimeException("Object not configured yet"); } return instance; } public static void configure(IApplicationSettingReader reader) { instance = reader.read(); } public String getFtpSetting(String param) { return map.get("ftp." + param); } public String getContentSetting(String param) { return map.get("content." + param); } } Test class: AppSettingsTest { IApplicationSettingReader reader; @Before public void setUp() throws Exception { reader = new DatabaseApplicationSettingReader(); } @Test public void getContentSetting_should_get_content_title() { AppSettings.configure(reader); AppSettings settings = AppSettings.getInstance(); String title = settings.getContentSetting("title"); assertNotNull(title); System.out.println(title); } } My questions are: Can you give your opinion about my code - is there something wrong? I configure my application settings once, while the application starts, with the appropriate reader (DbReader or PropertiesReader), and I make it a singleton. The problem is that when someone edits the database or the file directly, I can't get the changed values. Now, I want to implement something like an ApplicationSettingChangeListener, so that if the data changes, I will refresh my application settings. Do you have any suggestions for how this can be implemented?
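
    For the properties-file case, one way to get the ApplicationSettingChangeListener behaviour described above is to watch the file's directory with java.nio.file.WatchService (Java 7+) and re-run AppSettings.configure(reader) whenever the file changes; for the database case the usual substitute is a scheduled re-read or a polled version/timestamp column. A sketch of the file watcher (the listener interface name simply mirrors the one suggested in the question):

        import java.nio.file.FileSystems;
        import java.nio.file.Path;
        import java.nio.file.StandardWatchEventKinds;
        import java.nio.file.WatchEvent;
        import java.nio.file.WatchKey;
        import java.nio.file.WatchService;

        public class PropertiesChangeWatcher implements Runnable {

            public interface ApplicationSettingChangeListener {
                void settingsChanged();
            }

            private final Path directory;
            private final String fileName;
            private final ApplicationSettingChangeListener listener;

            public PropertiesChangeWatcher(Path propertiesFile, ApplicationSettingChangeListener listener) {
                this.directory = propertiesFile.toAbsolutePath().getParent();
                this.fileName = propertiesFile.getFileName().toString();
                this.listener = listener;
            }

            @Override
            public void run() {
                try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
                    directory.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
                    while (!Thread.currentThread().isInterrupted()) {
                        WatchKey key = watcher.take(); // blocks until something in the directory changes
                        for (WatchEvent<?> event : key.pollEvents()) {
                            Object context = event.context();
                            if (context != null && fileName.equals(context.toString())) {
                                listener.settingsChanged(); // e.g. call AppSettings.configure(reader) again
                            }
                        }
                        key.reset();
                    }
                } catch (Exception e) {
                    // Watching stopped; the settings simply stay as they were.
                }
            }
        }

    Starting it is then a matter of wrapping it in a Thread (or an ExecutorService task) and passing a listener that calls AppSettings.configure(reader) again.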

    Read the article

  • Should we denormalize the database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MS SQL. Our Measurement entity class is the one that holds a single measurement, and looks like this: public class Measurement { public virtual Guid Id { get; private set; } public virtual Device Device { get; private set; } public virtual Timestamp Timestamp { get; private set; } public virtual IList<VectorValue> Vectors { get; private set; } } Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later). The problem: Right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected): 1 timestamp 1 measurement 8 vector values which means that we are actually doing 10x more single table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid unnecessary performance costs related to the way the database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.

    Read the article

  • C++0x Overload on reference, versus sole pass-by-value + std::move?

    - by dean
    It seems the main advice concerning C++0x's rvalues is to add move constructors and move operators to your classes, until compilers default-implement them. But waiting is a losing strategy if you use VC10, because automatic generation probably won't be here until VC10 SP1, or in worst case, VC11. Likely, the wait for this will be measured in years. Here lies my problem. Writing all this duplicate code is not fun. And it's unpleasant to look at. But this is a burden well received, for those classes deemed slow. Not so for the hundreds, if not thousands, of smaller classes. ::sighs:: C++0x was supposed to let me write less code, not more! And then I had a thought. Shared by many, I would guess. Why not just pass everything by value? Won't std::move + copy elision make this nearly optimal? Example 1 - Typical Pre-0x constructor OurClass::OurClass(const SomeClass& obj) : obj(obj) {} SomeClass o; OurClass(o); // single copy OurClass(std::move(o)); // single copy OurClass(SomeClass()); // single copy Cons: A wasted copy for rvalues. Example 2 - Recommended C++0x? OurClass::OurClass(const SomeClass& obj) : obj(obj) {} OurClass::OurClass(SomeClass&& obj) : obj(std::move(obj)) {} SomeClass o; OurClass(o); // single copy OurClass(std::move(o)); // zero copies, one move OurClass(SomeClass()); // zero copies, one move Pros: Presumably the fastest. Cons: Lots of code! Example 3 - Pass-by-value + std::move OurClass::OurClass(SomeClass obj) : obj(std::move(obj)) {} SomeClass o; OurClass(o); // single copy, one move OurClass(std::move(o)); // zero copies, two moves OurClass(SomeClass()); // zero copies, one move Pros: No additional code. Cons: A wasted move in cases 1 & 2. Performance will suffer greatly if SomeClass has no move constructor. What do you think? Is this correct? Is the incurred move a generally acceptable loss when compared to the benefit of code reduction?

    Read the article

  • Hide public method used to help test a .NET assembly

    - by ChrisW
    I have a .NET assembly, to be released. Its release build includes: A public, documented API of methods which people are supposed to use A public but undocumented API of other methods, which exist only in order to help test the assembly, and which people are not supposed to use The assembly to be released is a custom control, not an application. To regression-test it, I run it in a testing framework/application, which uses (in addition to the public/documented API) some advanced/undocumented methods which are exported from the control. For the public methods which I don't want people to use, I excluded them from the documentation using the <exclude> tag (supported by the Sandcastle Help File Builder), and the [EditorBrowsable] attribute, for example like this: /// <summary> /// Gets a <see cref="IEditorTransaction"/> instance, which helps /// to combine several DOM edits into a single transaction, which /// can be undone and redone as if they were a single, atomic operation. /// </summary> /// <returns>A <see cref="IEditorTransaction"/> instance.</returns> IEditorTransaction createEditorTransaction(); /// <exclude/> [EditorBrowsable(EditorBrowsableState.Never)] void debugDumpBlocks(TextWriter output); This successfully removes the method from the API documentation, and from Intellisense. However, if in a sample application program I right-click on an instance of the interface to see its definition in the metadata, I can still see the method, and the [EditorBrowsable] attribute as well, for example: // Summary: // Gets a ModelText.ModelDom.Nodes.IEditorTransaction instance, which helps // to combine several DOM edits into a single transaction, which can be undone // and redone as if they were a single, atomic operation. // // Returns: // A ModelText.ModelDom.Nodes.IEditorTransaction instance. IEditorTransaction createEditorTransaction(); // [EditorBrowsable(EditorBrowsableState.Never)] void debugDumpBlocks(TextWriter output); Questions: Is there a way to hide a public method, even from the meta data? If not then instead, for this scenario, would you recommend making the methods internal and using the InternalsVisibleTo attribute? Or would you recommend some other way, and if so what and why? Thank you.

    Read the article

  • Problems with creating and using delegate protocols

    - by Flocked
    Hello, I have the problem that I have created a delegate protocol, but the necessary methods are not executed, although I have implemented the protocol in my header file. Here is the detailed explanation: I created an instance of my ViewController (TimeLineViewController), which will be displayed. This ViewController contains a UITableView, which in turn receives the individual cells/rows from an instance of my TableViewCell. So the ViewController creates an instance of TableViewCell. The TableViewCell contains a UITextView, which contains web links. Now I want the links to open in my own built-in browser instead of Safari. Unfortunately TableViewCell cannot open a new ViewController with a WebView, so I decided to create a delegate protocol. The whole thing looks like this: WebViewTableCellDelegate.h: @protocol WebViewTableCellDelegate -(void)loadWeb; @end Then I declared a delegate instance variable in the TableViewCell: id <WebViewTableCellDelegate> _delegate; In the .m of the TableViewCell: @interface UITextView (Override) @end @class WebView, WebFrame; @protocol WebPolicyDecisionListener; @implementation UITextView (Override) - (void)webView:(WebView *)webView decidePolicyForNavigationAction:(NSDictionary *)actionInformation request:(NSURLRequest *)request frame:(WebFrame *)frame decisionListener:(id < WebPolicyDecisionListener >)listener { NSLog(@"request: %@", request); [_delegate loadWeb]; } @end - (void)setDelegate:(id <WebViewTableCellDelegate>)delegate{ _delegate = delegate;} And in my TimeLineViewController I adopted the protocol (with <...> in the interface declaration) and implemented the loadWeb method: - (void)loadWeb{ WebViewController *web = [[WebViewController alloc] initWithNibName:nil bundle:nil]; web.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal; [self presentModalViewController: web animated:YES]; [web release]; } And this is where the instance of the TableViewCell is created in the TimeLineViewController: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *MyIdentifier = @"MyIdentifier"; MyIdentifier = @"tableCell"; TableViewCell *cell = (TableViewCell *)[tableView dequeueReusableCellWithIdentifier:MyIdentifier]; if(cell == nil) { [[NSBundle mainBundle] loadNibNamed:@"TableViewCell" owner:self options:nil]; cell = tableCell;} [cell setDelegate:self]; //… } It is the first time I have created my own delegate protocol, so maybe there are stupid mistakes. Also, I have only been learning Objective-C, and programming in general, for 4 weeks. Thanks for your help! EDIT: I think I found the problem, but I don't know how to resolve it. I use [_delegate loadWeb]; in the subclass of UITextView (because that is the only way I can react to the web links) and that subclass can't use [_delegate loadWeb];. I tried this in another method and it worked.

    Read the article

  • functional dependencies vs type families

    - by mhwombat
    I'm developing a framework for running experiments with artificial life, and I'm trying to use type families instead of functional dependencies. Type families seems to be the preferred approach among Haskellers, but I've run into a situation where functional dependencies seem like a better fit. Am I missing a trick? Here's the design using type families. (This code compiles OK.) {-# LANGUAGE TypeFamilies, FlexibleContexts #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u => a -> StateT u IO a -- plus other functions class Universe u where type MyAgent u :: * withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () defaultWithAgent = undefined -- stub -- plus default implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- .. and they'll also need to make SimpleUniverse an instance of Universe -- for their agent type. -- instance Universe SimpleUniverse where type MyAgent SimpleUniverse = Bug withAgent = defaultWithAgent -- boilerplate -- plus similar boilerplate for other functions Is there a way to avoid forcing my users to write those last two lines of boilerplate? Compare with the version using fundeps, below, which seems to make things simpler for my users. (The use of UndecideableInstances may be a red flag.) (This code also compiles OK.) {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u a => a -> StateT u IO a -- plus other functions class Universe u a | u -> a where withAgent :: Agent a => (a -> StateT u IO a) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } instance Universe SimpleUniverse a where withAgent = undefined -- stub -- plus implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- And now my users only have to write stuff like... -- u :: SimpleUniverse u = SimpleUniverse "mydir"

    Read the article

  • Why do I get a WCF timeout even though my service call and callback are successful?

    - by KallDrexx
    I'm playing around with hooking up an in-game console to a WCF interface, so an external application can send console commands and receive console output. To accomplish this I created the following service contracts: public interface IConsoleNetworkCallbacks { [OperationContract(IsOneWay = true)] void NewOutput(IEnumerable<string> text, string category); } [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IConsoleNetworkCallbacks))] public interface IConsoleInterface { [OperationContract] void ProcessInput(string input); [OperationContract] void ChangeCategory(string category); } On the server I implemented it with: public class ConsoleNetworkInterface : IConsoleInterface, IDisposable { public ConsoleNetworkInterface() { ConsoleManager.Instance.RegisterOutputUpdateHandler(OutputHandler); } public void Dispose() { ConsoleManager.Instance.UnregisterOutputHandler(OutputHandler); } public void ProcessInput(string input) { ConsoleManager.Instance.ProcessInput(input); } public void ChangeCategory(string category) { ConsoleManager.Instance.UnregisterOutputHandler(OutputHandler); ConsoleManager.Instance.RegisterOutputUpdateHandler(OutputHandler, category); } protected void OutputHandler(IEnumerable<string> text, string category) { var callbacks = OperationContext.Current.GetCallbackChannel<IConsoleNetworkCallbacks>(); callbacks.NewOutput(text, category); } } On the client I implemented the callback with: public class Callbacks : IConsoleNetworkCallbacks { public void NewOutput(IEnumerable<string> text, string category) { MessageBox.Show(string.Format("{0} lines received for '{1}' category", text.Count(), category)); } } Finally, I establish the service host with the following class: public class ConsoleServiceHost : IDisposable { protected ServiceHost _host; public ConsoleServiceHost() { _host = new ServiceHost(typeof(ConsoleNetworkInterface), new Uri[] { new Uri("net.pipe://localhost") }); _host.AddServiceEndpoint(typeof(IConsoleInterface), new NetNamedPipeBinding(), "FrbConsolePipe"); _host.Open(); } public void Dispose() { _host.Close(); } } and use the following code on my client to establish the connection: protected Callbacks _callbacks; protected IConsoleInterface _proxy; protected void ConnectToConsoleServer() { _callbacks = new Callbacks(); var factory = new DuplexChannelFactory<IConsoleInterface>(_callbacks, new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/FrbConsolePipe")); _proxy = factory.CreateChannel(); _proxy.ProcessInput("Connected"); } So what happens is that my ConnectToConsoleServer() is called and then it gets all the way to _proxy.ProcessInput("Connected");. In my game (on the server) I immediately see the output caused by the ProcessInput call, but the client is still stalled on the _proxy.ProcessInput() call. After a minute my client gets a JIT TimeoutException however at the same time my MessageBox message appears. So obviously not only is my command being sent immediately, my callback is being correctly called. So why am I getting a timeout exception? Note: Even removing the MessageBox call, I still have this issue, so it's not an issue of the GUI blocking the callback response.

    Read the article

  • Marionette js itemview not defined: then on browser refresh it is defined and all works well - race condition?

    - by Robert
    Yeah it's just the initial browser load or two after a cache clear. Subsequent refreshes clear the problem up. I'm thinking the item views just aren't fully constructed in time to be used in the collection views on the first load. But then they are on a refresh? Don't know. There must be something about the code sequence or loading or the load time itself. Not sure. I'm loading via require.js. Have two collections - users and messages. Each renders in its own list view. Each works, just not the first time or two the browser loads. The first time you load after clearing browser cache the console reports, for instance: "Uncaught ReferenceError: MessageItemView is not defined" A simple browser refresh clears it up. Same goes for the user collection. It's collection view says it doesn't know anything about its item view. But a simple browser refresh and all is well. My views (item and collection) are in separate files. Is that the problem? For instance, here is my message collection view in its own file: messagelistview.js var MessageListView = Marionette.CollectionView.extend({ itemView: MessageItemView, el: $("#messages") }); And the message item view is in a separate file: messageview.js var MessageItemView = Marionette.ItemView.extend({ tagName: "div", template: Handlebars.compile( '<div>{{fromUserName}}:</div>' + '<div>{{message}}</div>' + ) }); Then in my main module file, which references each of those files, the collection view is constructed and displayed: main.js //Define a model MessageModel = Backbone.Model.extend(); //Make an instance of MessageItemView - code in separate file, messagelistview.js MessageView = new MessageItemView(); //Define a message collection var MessageCollection = Backbone.Collection.extend({ model: MessageModel }); //Make an instance of MessageCollection var collMessages = new MessageCollection(); //Make an instance of a MessageListView - code in separate file, messagelistview.js var messageListView = new MessageListView({ collection: collMessages }); App.messageListRegion.show(messageListView); Do I just have things sequenced wrong? I'm thinking it's some kind of race condition only because over 3G to an iPad the item views are always undefined. They never seem to get constructed in time. PC on a hard wired connection does see success after a browser refresh or two.

    Read the article

  • Cross-thread exception and InvokeRequired solution doesn't change my control value

    - by Pilouk
EDIT (solution): Here I'm setting my ByRef values in each object and then running a BackgroundWorker: Private Sub TelechargeFichier() Dim DocManquant As Boolean = False Dim docName As String = "" Dim lg As String = "" Dim telechargementFini As Boolean = False lblMessage.Text = EasyDealChangeLanguage.Instance.GetStringFromResourceName("1478") prgBar.Maximum = m_listeFichiers.Count For i As Integer = 0 To m_listeFichiers.Count - 1 m_listeFichiers(i).Set_ByRefLabel(lblMessage) m_listeFichiers(i).Set_ByRefPrgbar(prgBar) m_listeThreads.Add(New Thread(AddressOf m_listeFichiers(i).DownloadMe)) Next m_bgWorker = New BackgroundWorker m_bgWorker.WorkerReportsProgress = True AddHandler m_bgWorker.DoWork, AddressOf DownloadFiles m_bgWorker.RunWorkerAsync() ''Completed 'lblMessage.Text = EasyDealChangeLanguage.Instance.GetStringFromResourceName("1383") 'Me.DialogResult = System.Windows.Forms.DialogResult.OK End Sub Here is my DownloadFiles function. Note that each Start() will run the DownloadMe function (see below): Private Sub DownloadFiles(sender As Object, e As DoWorkEventArgs) For i As Integer = 0 To m_listeThreads.Count - 1 m_listeThreads(i).Start() Next For i As Integer = 0 To m_listeThreads.Count - 1 m_listeThreads(i).Join() Next End Sub I have multiple threads that each download an FTP file. I would like each completed file to update a progress bar and a label on my UI thread. For some reason InvokeRequired never changes to False. Here is my little function that starts all the threads: Private Sub TelechargeFichier() Dim DocManquant As Boolean = False Dim docName As String = "" Dim lg As String = "" Dim telechargementFini As Boolean = False lblMessage.Text = EasyDealChangeLanguage.Instance.GetStringFromResourceName("1478") prgBar.Maximum = m_listeFichiers.Count For i As Integer = 0 To m_listeFichiers.Count - 1 m_listeFichiers(i).Set_ByRefLabel(lblMessage) m_listeFichiers(i).Set_ByRefPrgbar(prgBar) m_listeThreads.Add(New Thread(AddressOf m_listeFichiers(i).DownloadMe)) Next For i As Integer = 0 To m_listeThreads.Count - 1 m_listeThreads(i).Start() Next For i As Integer = 0 To m_listeThreads.Count - 1 m_listeThreads(i).Join() Next 'Completed lblMessage.Text = EasyDealChangeLanguage.Instance.GetStringFromResourceName("1383") Me.DialogResult = System.Windows.Forms.DialogResult.OK End Sub Here are my setters that hold the ByRef controls from the UI thread.
This is in my object, which contains the AddressOf target function that downloads the file (DownloadMe): Public Sub Set_ByRefPrgbar(ByRef prgbar As ProgressBar) m_prgBar = prgbar End Sub Public Sub Set_ByRefLabel(ByRef lbl As EasyDeal.Controls.EasyDealLabel3D) m_lblMessage = lbl End Sub Here is the download function: Public Sub DownloadMe() Dim ftpReq As FtpWebRequest Dim ftpResp As FtpWebResponse = Nothing Dim streamInput As Stream Dim fileStreamOutput As FileStream Try ftpReq = CType(WebRequest.Create(EasyDeal.Controls.Common.FTP_CONNECTION & m_downloadFtpPath & m_filename), FtpWebRequest) ftpReq.Credentials = New NetworkCredential(FTP_USER, FTP_PASS) ftpReq.Method = WebRequestMethods.Ftp.DownloadFile ftpResp = ftpReq.GetResponse streamInput = ftpResp.GetResponseStream() fileStreamOutput = New FileStream(m_outputPath, FileMode.Create, FileAccess.ReadWrite) ReadWriteStream(streamInput, fileStreamOutput) Catch ex As Exception 'At worst the file just won't be downloaded Finally If ftpResp IsNot Nothing Then ftpResp.Close() End If Dim nomFichier As String = m_displaynameEN If EasyDealChangeLanguage.GetCurrentLanguageTypes = EasyDealChangeLanguage.EnumLanguageType.Francais Then nomFichier = m_displaynameFR End If If m_lblMessage IsNot Nothing Then EasyDealCommon.TH_SetControlText(m_lblMessage, String.Format(EasyDealChangeLanguage.Instance.GetStringFromResourceName("1479"), nomFichier)) End If If m_prgBar IsNot Nothing Then EasyDealCommon.TH_SetPrgValue(m_prgBar, 1) End If End Try End Sub Here are the cross-thread invoke helper functions: Public Sub TH_SetControlText(ByVal ctl As Control, ByVal text As String) If ctl.InvokeRequired Then ctl.BeginInvoke(New Action(Of Control, String)(AddressOf TH_SetControlText), ctl, text) Else ctl.Text = text End If End Sub Public Sub TH_SetPrgValue(ByVal prg As ProgressBar, ByVal value As Integer) If prg.InvokeRequired Then prg.BeginInvoke(New Action(Of ProgressBar, Integer)(AddressOf TH_SetPrgValue), prg, value) Else prg.Value += value End If End Sub The problem is that InvokeRequired never gets to False: it always goes into BeginInvoke and never ends up in the Else branch that sets the value.

    Read the article

  • C# Different class objects in one list

    - by jeah_wicer
    I have looked around for a while now to find a solution to this problem. I found several ways that could solve it, but to be honest I couldn't tell which of them would be considered the "right" C# or OOP way of solving it. My goal is not only to solve the problem but also to develop a good set of code standards, and I'm fairly sure there's a standard way to handle this problem. Let's say I have 2 types of printer hardware with their respective classes and ways of communicating: PrinterType1 and PrinterType2. I would also like to be able to add another type later on if necessary. One step up in abstraction, they have much in common. It should be possible to send a string to each one of them, as an example. They both have variables in common and variables unique to each class. (One, for instance, communicates via a COM port and has such an object, while the other one communicates via TCP and has such an object.) I would however like to just implement a List of all those printers and be able to go through the list and perform things like "Send(string message)" on all printers regardless of type. I would also like to access variables like "PrinterList[0].Name" that are the same for both objects; however, I would also in some places like to access data that is specific to the object itself (for instance in the settings window of the application, where the COM port name is set for one object and the IP/port number for another). So, in short, something like: In common: Name Send() Specific to PrinterType1: Port Specific to PrinterType2: IP And I wish to, for instance, do Send() on all objects regardless of type and the number of objects present. I've read about polymorphism, generics, interfaces and such, but I would like to know how this, in my eyes basic, problem would typically be dealt with in C# (and/or OOP in general). I actually did try to make a base class, but it didn't quite seem right to me. For instance, I have no use for a "string Send(string Message)" function in the base class itself. So why would I define one there that needs to be overridden in the derived classes, when I would never use the function in the base class in the first place? I'm really thankful for any answers. People around here seem very knowledgeable and this place has provided me with many solutions earlier. Now I finally have an account to answer and vote with too. EDIT: To additionally explain, I would also like to be able to access the members of the actual printer type - for instance the Port variable in PrinterType1, which is a SerialPort object. I would like to access it like PrinterList[0].Port.Open() and have access to the full range of functionality of the underlying port. At the same time I would like to call generic functions that work in the same way for the different objects (but with different implementations): foreach (printer in Printers) printer.Send(message)
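
    A sketch of the shape usually suggested for this, written in Java here but structurally the same in C#: the common members (Name, Send) live on an abstract base type, the transport-specific members (Port, IP) live on the derived types, the list is typed to the base, and type-specific data is reached by checking the concrete type where needed (for example in a settings dialog). Declaring Send as abstract in the base does not mean implementing or calling it there; it only promises that every concrete printer provides one. All class names below are invented:

        import java.util.ArrayList;
        import java.util.List;

        abstract class Printer {
            final String name;
            Printer(String name) { this.name = name; }
            abstract void send(String message);        // same operation, different transport
        }

        class SerialPrinter extends Printer {
            final String port;                         // specific to this type (a SerialPort object in the real thing)
            SerialPrinter(String name, String port) { super(name); this.port = port; }
            void send(String message) { /* write to the COM port */ }
        }

        class NetworkPrinter extends Printer {
            final String ip;                           // specific to this type
            NetworkPrinter(String name, String ip) { super(name); this.ip = ip; }
            void send(String message) { /* open a TCP socket and write */ }
        }

        public class PrinterDemo {
            public static void main(String[] args) {
                List<Printer> printers = new ArrayList<>();
                printers.add(new SerialPrinter("Front desk", "COM3"));
                printers.add(new NetworkPrinter("Warehouse", "10.0.0.17"));

                // Works for every printer in the list, regardless of concrete type.
                for (Printer p : printers) {
                    p.send("hello from " + p.name);
                }

                // Type-specific members are reached by checking the concrete type.
                for (Printer p : printers) {
                    if (p instanceof NetworkPrinter) {
                        System.out.println(((NetworkPrinter) p).ip);
                    }
                }
            }
        }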

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams and I will not be going in depth with Scrum but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS which have been developed during our own Scrum implementations. Acknowledgements Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010 Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010 Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs own project, backlog, budget and Team. Scenario: A product is developed RTM 1.0 is released and gets great sales.  Extra features are demanded but the new version will have double to price to pay to recover costs, work is approved by the guys with budget and a few sprints later RTM 2.0 is released.  Sales a very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms merge (forward integration) from parent to child (same direction as the branch), and merge  (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.   As I mentioned previously we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds don’t you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching  structures from he... We should somehow convince everyone that there is a happy between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc. 
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well.

Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are however a couple of issues that need to be considered. What happens if:

- you and your team are working on a new set of features and the customer wants a change to his current version?
- you are working on two features and the customer decides to abandon one of them?
- you have two teams working on different feature sets and their changes start interfering with each other?
- I just use labels instead of branches?

That's a lot of “what ifs”, but there is a simple way of preventing this. Branching…

In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Creating a single feature branch means you can isolate the development work on that branch.

It’s standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The Main branch has its permissions changed so that contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on Sprint 1 until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line.

There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers’ work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development:

1. The Team completes their work according to their definition of done.
2. Merge from “Main” into “Sprint1” (Forward Integration).
3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about release branches later).
4. Merge from “Sprint1” into “Main” to commit your changes (Reverse Integration).
5. Check-in.
6. Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required as its useful life has been concluded.
7. Check-in.
8. Done.

(See the code sketch at the end of this post for how the Forward and Reverse Integration merges can be scripted against TFS.)

But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet; we only have finished code.

Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: If you find a deviation from the expected result you fix it on the Release branch.

If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the Release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches.

Even once you have stabilized and released, you should not delete the Release branch as you would with the Sprint branch. Its usefulness for servicing may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features to a Release branch; that indicates the unspoken requirement is really a product spin-off. In this case you can create a new Team Project, branch from the required Release branch to create a new Main branch for that product, and create a whole new backlog to work from.

Figure: When the Team decides it is happy with the product you can create an RTM branch.

Once you have fixed all the bugs you can, added any you can’t to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an audit trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable; with a branch, if a dispute is raised by the customer, you can produce a verifiable version of the source code for an independent party to check. 
Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main.

The same rules apply when merging any fixes in the Release branch back into Main, and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.

Figure: After the Sprint Planning meeting the process begins again.

Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in.

How do we handle a bug(s) in production that can’t wait?

Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FI’s from Main to Sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resources need to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
3. Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
4. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as that work will be lost.

The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are severe but uncomplicated.

Figure: Real world issue where a bug needs to be fixed in the current release. 
    If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: After you have fixed the bug you need to ship again.

You then need to again create an RTM branch to hold the version of the code you released in escrow.

Figure: Main is now out of sync with your Release.

We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

Figure: Getting the changes in Main into Sprint 2 is very important.

The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the “Big Bang merge” at all costs.

Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates.

Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

Figure: The logical conclusion.

This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
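
The Forward Integration and Reverse Integration merges described above are normally run from Source Control Explorer or tf.exe, but they can also be scripted. Below is a minimal sketch using the TFS 2010 version control client API; the collection URL, the $/Product/Main and $/Product/Sprint1 server paths and the local workspace folder are placeholders of mine, and the sketch assumes an existing workspace mapping and does no conflict handling (you would still resolve any merge conflicts in Visual Studio before checking in).

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class SprintMergeSketch
    {
        static void Main()
        {
            // Placeholder collection URL and server paths - adjust for your own Team Project.
            var collection = new TfsTeamProjectCollection(new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var versionControl = collection.GetService<VersionControlServer>();

            // Assumes a workspace is already mapped to this local folder.
            Workspace workspace = versionControl.GetWorkspace(@"C:\src\Product");

            // Forward Integration: merge Main (parent) down into the Sprint branch (child).
            workspace.Merge("$/Product/Main", "$/Product/Sprint1", null, null);
            CheckInPendingChanges(workspace, "FI: Main -> Sprint1");

            // ... stabilize, build and test the Sprint branch here ...

            // Reverse Integration: merge the Sprint branch (child) back up into Main (parent).
            workspace.Merge("$/Product/Sprint1", "$/Product/Main", null, null);
            CheckInPendingChanges(workspace, "RI: Sprint1 -> Main");
        }

        static void CheckInPendingChanges(Workspace workspace, string comment)
        {
            // Only check in if the merge actually produced pending changes.
            PendingChange[] changes = workspace.GetPendingChanges();
            if (changes.Length > 0)
                workspace.CheckIn(changes, comment);
        }
    }

The same two calls, with the Release branch server path substituted for the Sprint branch, would cover the Release-branch fixes discussed above.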

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads that are serving requests use Single Threaded Apartment (STA) threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread, and any access to a COM object from another thread automatically marshals that call to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never changing thread.

ASP.NET by default uses MTA (multi-threaded apartment) threads, which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM Components.

STA in ASP.NET

Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation, or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag:

<%@ Page Language="C#" AspCompat="true" %>

which runs the page in STA mode. Removing it runs in MTA mode. Simple.

Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available.

So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine as there are no simultaneous thread interactions, and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components, which store their state in thread local storage on the STA thread. If threads overlap, this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.

What about COM+?

COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool. 
This guarantees that the COM instantiation threads are STA threads if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticably in load tests in IIS. COM+ can make sense in some situations but for Web apps with STA components it falls short. In addition there's also the need to ensure that COM+ is set up and configured on the target machine and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (same is true for IIS hosted components but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO. STA for non supporting ASP.NET Technologies As mentioned above only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components which is about as dififcult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET Technologies: ASMX Web Services ASP.NET MVC WCF Web Services ASP.NET Web API ASMX Web Services I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology to enable the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack it's init functionality for processing requests. Here's what this looks like for Web Services:namespace FoxProAspNet { public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler { protected override void OnInit(EventArgs e) { IHttpHandler handler = new WebServiceHandlerFactory().GetHandler( this.Context, this.Context.Request.HttpMethod, this.Context.Request.FilePath, this.Context.Request.PhysicalPath); handler.ProcessRequest(this.Context); this.Context.ApplicationInstance.CompleteRequest(); } public IAsyncResult BeginProcessRequest( HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } } public class AspCompatWebServiceStaHandlerWithSessionState : WebServiceStaHandler, IRequiresSessionState { } } This class overrides the ASP.NET WebForms Page class which has a little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() method that is responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state use the latter class, otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods. 
The way this works is that BeginProcessRequest() fires first to set up the threads and starts intializing the handler. As part of that process the OnInit() method is fired which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing which generates the Web Service result. Immediately after ProcessRequest the request is stopped with Application.CompletRequest() which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing which makes this code fairly efficient. In a nutshell, we're highjacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior. Hooking up the Handler Because the above is an HttpHandler implementation you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express): <configuration> <system.webServer> <handlers> <remove name="WebServiceHandlerFactory-Integrated-4.0" /> <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" precondition="integrated"/> </handlers> </system.webServer> </configuration> (Note: The name for the WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name or simply remove the handler from the list there which will propagate to your web.config). For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use:<configuration> <system.web> <httpHandlers> <remove path="*.asmx" verb="*" /> <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" /> </httpHandlers> </system.web></configuration> To test, create a new ASMX Web Service and create a method like this: [WebService(Namespace = "http://foxaspnet.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class FoxWebService : System.Web.Services.WebService { [WebMethod] public string HelloWorld() { return "Hello World. Threading mode is: " + System.Threading.Thread.CurrentThread.GetApartmentState(); } } Run this before you put in the web.config configuration changes and you should get: Hello World. Threading mode is: MTA Then put the handler mapping into Web.config and you should see: Hello World. Threading mode is: STA And you're on your way to using STA COM components. It's a hack but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable. ASP.NET MVC ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms ASP.NET MVC doesn't support STA threads natively and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built in routing semantics. 
To work in an STA handler requires working in the Page Handler as part of the Route Handler implementation. As with the Web Service handler the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState { private RequestContext _requestContext; public MvcStaThreadHttpAsyncHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); _requestContext = requestContext; } public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } protected override void OnInit(EventArgs e) { var controllerName = _requestContext.RouteData.GetRequiredString("controller"); var controllerFactory = ControllerBuilder.Current.GetControllerFactory(); var controller = controllerFactory.CreateController(_requestContext, controllerName); if (controller == null) throw new InvalidOperationException("Could not find controller: " + controllerName); try { controller.Execute(_requestContext); } finally { controllerFactory.ReleaseController(controller); } this.Context.ApplicationInstance.CompleteRequest(); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } public override void ProcessRequest(HttpContext httpContext) { throw new NotSupportedException("STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)"); } } This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler the logic occurs in the OnInit() and performs all the processing in that part of the request. Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide on what handler to invoke. The route handler is pretty simple - all it does is load our custom handler: public class MvcStaThreadRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); return new MvcStaThreadHttpAsyncHandler(requestContext); } } At this point you can instantiate this route handler and force STA requests to MVC by specifying a route. 
The following sets up the ASP.NET Default Route:Route mvcRoute = new Route("{controller}/{action}/{id}", new RouteValueDictionary( new { controller = "Home", action = "Index", id = UrlParameter.Optional }), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute);   To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() functionality extension method that MVC provides, here is an extension method for MapMvcStaRoute(): public static class RouteCollectionExtensions { public static void MapMvcStaRoute(this RouteCollection routeTable, string name, string url, object defaults = null) { Route mvcRoute = new Route(url, new RouteValueDictionary(defaults), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute); } } With this the syntax to add  route becomes a little easier and matches the MapRoute() method:RouteTable.Routes.MapMvcStaRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); The nice thing about this route handler, STA Handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy! To see whether this works create an MVC controller like this: public class ThreadTestController : Controller { public string ThreadingMode() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Try this test both with only the MapRoute() hookup in the RouteConfiguration in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute() leaving all the parameters the same and re-run the request. You now should see STA as the result. You're on your way using STA COM components reliably in ASP.NET MVC. WCF Web Services running through IIS WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule if you need to serve straight SOAP Services over HTTP I 'd recommend sticking with the simpler ASMX services especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols then WCF makes more sense. WCF is not my forte but I found a solution from Scott Seely on his blog that describes the progress and that seems to work well. I'm copying his code below so this STA information is all in one place and quickly explain. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperation] attribute on every method. Using his attribute you end up with a class (or Interface if you separate the contract and class) that looks like this: [ServiceContract] public class WcfService { [OperationContract] public string HelloWorldMta() { return Thread.CurrentThread.GetApartmentState().ToString(); } // Make sure you use this custom STAOperationBehavior // attribute to force STA operation of service methods [STAOperationBehavior] [OperationContract] public string HelloWorldSta() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Pretty straight forward. The latter method returns STA while the former returns MTA. To make STA work every method needs to be marked up. The implementation consists of the attribute and OperationInvoker implementation. 
Here are the two classes required to make this work from Scott's post:public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior { public void AddBindingParameters(OperationDescription operationDescription, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.ClientOperation clientOperation) { // If this is applied on the client, well, it just doesn’t make sense. // Don’t throw in case this attribute was applied on the contract // instead of the implementation. } public void ApplyDispatchBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation) { // Change the IOperationInvoker for this operation. dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker); } public void Validate(OperationDescription operationDescription) { if (operationDescription.SyncMethod == null) { throw new InvalidOperationException("The STAOperationBehaviorAttribute " + "only works for synchronous method invocations."); } } } public class STAOperationInvoker : IOperationInvoker { IOperationInvoker _innerInvoker; public STAOperationInvoker(IOperationInvoker invoker) { _innerInvoker = invoker; } public object[] AllocateInputs() { return _innerInvoker.AllocateInputs(); } public object Invoke(object instance, object[] inputs, out object[] outputs) { // Create a new, STA thread object[] staOutputs = null; object retval = null; Thread thread = new Thread( delegate() { retval = _innerInvoker.Invoke(instance, inputs, out staOutputs); }); thread.SetApartmentState(ApartmentState.STA); thread.Start(); thread.Join(); outputs = staOutputs; return retval; } public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state) { // We don’t handle async… throw new NotImplementedException(); } public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result) { // We don’t handle async… throw new NotImplementedException(); } public bool IsSynchronous { get { return true; } } } The key in this setup is the Invoker and the Invoke method which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each thread, but it'll work for low volume requests and insure each thread runs in STA mode. If better performance is required it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed rather than creating new threads on every request. If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with and for STA component use it'll be significantly better performing since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scotts code is though that it works in any WCF environment including self hosting. It has no dependency on ASP.NET or WebForms for that matter. STA - If you must STA components are a  pain in the ass and thankfully there isn't too much stuff out there anymore that requires it. But when you need it and you need to access STA functionality from .NET at least there are a few options available to make it happen. 
Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place here makes STA consumption a little bit easier. I feel your pain :-)

Resources

Download STA Handler Code Examples
Scott Seely's original STA WCF OperationBehavior Article

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in FoxPro, ASP.NET, .NET, COM
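
As a footnote to the WCF section above: the per-call thread creation in the STAOperationInvoker could be replaced with a small pool of long-lived STA threads if throughput matters. The class below is only a rough sketch of that idea (the StaThreadPool name and its members are mine, not part of any framework); it queues delegates to a fixed set of STA worker threads and blocks the caller until the work completes, with no shutdown or per-object COM affinity handling.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // A minimal pool of STA threads that execute queued work items.
    public class StaThreadPool : IDisposable
    {
        private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
        private readonly Thread[] _threads;

        public StaThreadPool(int threadCount)
        {
            _threads = new Thread[threadCount];
            for (int i = 0; i < threadCount; i++)
            {
                var thread = new Thread(() =>
                {
                    // Each worker pulls actions off the shared queue and runs them on its STA thread.
                    foreach (var action in _work.GetConsumingEnumerable())
                        action();
                });
                thread.SetApartmentState(ApartmentState.STA); // must be set before Start()
                thread.IsBackground = true;
                thread.Start();
                _threads[i] = thread;
            }
        }

        // Runs the delegate on one of the pooled STA threads and blocks until it completes.
        public T Invoke<T>(Func<T> func)
        {
            T result = default(T);
            Exception error = null;
            using (var done = new ManualResetEventSlim())
            {
                _work.Add(() =>
                {
                    try { result = func(); }
                    catch (Exception ex) { error = ex; }
                    finally { done.Set(); }
                });
                done.Wait();
            }
            if (error != null)
                throw error; // sketch only: rethrowing this way loses the original stack trace
            return result;
        }

        public void Dispose()
        {
            // Stop accepting work; the background workers exit once the queue drains.
            _work.CompleteAdding();
        }
    }

An operation invoker could then call pool.Invoke(...) instead of spinning up a new thread per request; the inner invoker's out parameter would need to be captured inside the delegate and copied out afterwards.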

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to un-usual usage patterns for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of devices including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The later approach is ideal for companies that wish to use their corporate user identities to sign-in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

< Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >