Search Results

Search found 38248 results on 1530 pages for 'project folder'.


  • How can I get the project type on the NetBeans Platform?

    - by Fabio
    Hi folks. Is there a way to know the type of the selected project? I would like to perform specific actions depending on the project type, for example when it is a J2SE project. Below is the only way I have found to do it:

        public final class MyAction extends CookieAction {

            @Override
            public boolean isEnabled() {
                if (this.getActivatedNodes() == null || this.getActivatedNodes().length != 1) {
                    return false;
                }
                Lookup lookup = this.getActivatedNodes()[0].getLookup();
                // gets the selected project
                Project currentProject = lookup.lookup(Project.class);
                // checks if the selected project is a J2SE project or a Maven project
                if (currentProject != null
                        && (currentProject.getClass().getSimpleName().equals("J2SEProject")
                            || currentProject.getClass().getSimpleName().equals("NbMavenProjectImpl"))) {
                    return true;
                }
                return false;
            }
        }
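
    A slightly sturdier variant of the same check, for what it's worth, compares fully-qualified class names, since simple names can collide. The two class names below are implementation details of the NetBeans J2SE and Maven project modules (not a documented API) and may change between releases, so treat this as a sketch rather than a supported approach:

        import org.netbeans.api.project.Project;

        public final class ProjectTypes {

            // Implementation classes observed in NetBeans 6.x; not a stable, documented contract.
            private static final String J2SE_PROJECT = "org.netbeans.modules.java.j2seproject.J2SEProject";
            private static final String MAVEN_PROJECT = "org.netbeans.modules.maven.NbMavenProjectImpl";

            private ProjectTypes() { }

            public static boolean isJavaSeOrMaven(Project project) {
                if (project == null) {
                    return false;
                }
                String className = project.getClass().getName();
                return J2SE_PROJECT.equals(className) || MAVEN_PROJECT.equals(className);
            }
        }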

  • How to copy a referenced assembly's dependencies to the ASP.NET output bin folder?

    - by LD2008
    Hi all. In Visual Studio 2010 I have project A (an ASP.NET application). Project A references project B (a class library), and project B references assembly C (a direct reference to a DLL). When I build project A, only the project A and project B binaries are present in project A's /bin directory, but not assembly C. Why is that? If project B depends on assembly C, why is assembly C not copied to the output folder as well? "Copy Local" is already set to true for assembly C. Any information would be appreciated. Thanks!
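
    One commonly used workaround, sketched below, exploits the fact that MSBuild only copies indirect references that project B's compiled assembly actually records: touching one public type from assembly C in project B's code makes the dependency real, so assembly C then flows into project A's bin folder. AssemblyC.SomeExportedType is a placeholder for any public type that assembly C really exposes:

        // Hypothetical sketch for project B. The C# compiler only records (and MSBuild only
        // copies) indirect references that are actually used, so touching one public type
        // from assembly C forces it into project A's bin folder.
        using System;

        internal static class AssemblyCDependencyAnchor
        {
            // Never used at runtime; its only purpose is to make the reference "real".
            private static readonly Type ForceCopyLocal = typeof(AssemblyC.SomeExportedType);
        }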

  • Parsing xml files locally from assets folder using XmlPullParser

    - by Randolphg
    I'm trying to parse a local XML file that I place in my assets folder, and I've been trying to get this working for almost a week now. My test XML file contains the items Test1 Test2 Test3 Test4 Test5, and I keep getting the same error:

        W/System.err(22458): org.xmlpull.v1.XmlPullParserException: unexpected type (position:TEXT

    Code:

        public void xmlParser() throws XmlPullParserException, IOException,
                ParserConfigurationException, SAXException {
            Log.d("tag", "xmlParsing....");
            Arithmetic arthm = new Arithmetic();
            XmlPullParserFactory xmlPF = XmlPullParserFactory.newInstance();
            xmlPF.setValidating(false);
            XmlPullParser xml = xmlPF.newPullParser();
            InputStream raw = getApplication().getAssets().open("menu.xml");
            xml.setInput(raw, null);
            xml.nextTag();
            Log.d("tag", "start parsing....");
            String elementText = null;
            String elemName = null;
            int nofTags = 0;
            while (xml.getEventType() != XmlPullParser.END_DOCUMENT) {
                Log.d("tag", "while(xml.next)...");
                switch (xml.getEventType()) {
                    case XmlPullParser.START_DOCUMENT:
                        Log.d("tag", "while (xml.getEventType() != XmlPullParser.END_DOCUMENT)");
                        break;
                    case XmlPullParser.START_TAG:
                        Log.d("tag", " case XmlPullParser.START_TAG");
                        elementText = xml.getName();
                        Log.d("tag", "elementText = " + elementText);
                        if (xml.getEventType() != XmlPullParser.END_TAG) {
                            xml.nextTag();
                        }
                        break;
                    case XmlPullParser.TEXT:
                        Log.d("tag", "case TEXT");
                        if (elementText.equals("menu") && xml.isWhitespace()) {
                            Log.d("tag", "<" + elementText + ">");
                            arthm.menu_name = xml.getText();
                            Log.d("tag", "value " + xml.getText() + " added");
                        } else if (elementText.equals("item")) {
                            arthm.description = xml.getText();
                            Log.d("tag", "value " + xml.getText() + " added");
                        } else if (elementText.equals("SUBCATEGORY NAME")) {
                            arthm.subcategoryDesc.add(xml.getText());
                            Log.d("tag", "value " + xml.getText() + " added");
                        } else if (elementText.equals("SUBCATEGORY DESC")) {
                            arthm.subcategoryName.add(xml.getText());
                            Log.d("tag", "value " + xml.getText() + " added");
                        }
                        break;
                    case XmlPullParser.END_TAG:
                        Log.d("tag", "case END_TAG");
                        nofTags += 1;
                        String tags = Integer.toString(nofTags);
                        Log.d("tags", elementText + " number of tags" + tags);
                        if (xml.nextTag() != XmlPullParser.START_TAG) {
                            xml.next();
                        }
                        break;
                    case XmlPullParser.END_DOCUMENT:
                        Log.d("tag", "case END_DOCUMENT");
                        break;
                    default:
                        break;
                }
            }
            Log.d("tag", "Success!");
        }

    Thanks in advance.
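
    A minimal sketch of a more conventional pull-parser loop is below. It assumes the elements in menu.xml are named "item" (adjust the tag names to the real file) and advances the parser with next() in exactly one place, which is what avoids the "unexpected type (position:TEXT ...)" exception that nextTag() throws when the next event is character data:

        import java.io.IOException;
        import java.io.InputStream;

        import org.xmlpull.v1.XmlPullParser;
        import org.xmlpull.v1.XmlPullParserException;
        import org.xmlpull.v1.XmlPullParserFactory;

        import android.content.Context;
        import android.util.Log;

        public class MenuAssetParser {

            // Reads assets/menu.xml and logs the text of every <item> element.
            public void parseMenu(Context context) throws XmlPullParserException, IOException {
                XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
                InputStream in = context.getAssets().open("menu.xml");
                try {
                    parser.setInput(in, null);
                    String currentTag = null;
                    int eventType = parser.getEventType();
                    while (eventType != XmlPullParser.END_DOCUMENT) {
                        switch (eventType) {
                            case XmlPullParser.START_TAG:
                                currentTag = parser.getName();
                                break;
                            case XmlPullParser.TEXT:
                                if (!parser.isWhitespace() && "item".equals(currentTag)) {
                                    Log.d("MenuAssetParser", "item text: " + parser.getText());
                                }
                                break;
                            case XmlPullParser.END_TAG:
                                currentTag = null;
                                break;
                            default:
                                break;
                        }
                        eventType = parser.next(); // the only place the parser is advanced
                    }
                } finally {
                    in.close();
                }
            }
        }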

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and pointed my public_html at /home/user/public_html/website.com/public, but requests always end up being served from /usr/local/nginx/html/ instead. How can I change this?

    nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    /usr/local/nginx/sites-enabled/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    /usr/local/nginx/sites-available/website.com:

        server {
            listen 80;
            server_name website.com;
            rewrite ^/(.*) http://www.website.com/$1 permanent;
        }

        server {
            listen 80;
            server_name www.website.com;

            access_log /home/user/public_html/website.com/log/access.log;
            error_log /home/user/public_html/website.com/log/error.log;

            location / {
                root /home/user/public_html/website.com/public/;
                index index.php index.html;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
            }
        }

    The error message I get is "Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'" - the server tries to find the file under the Nginx install folder and not in my public_html.
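
    Two things stand out, for what it's worth: nginx.conf only includes /usr/local/nginx/sites-enabled/*, while the website.com server block sits in sites-available, so unless it is symlinked into sites-enabled every request falls through to the default server (whose root is the compiled-in html directory); and the root is declared only inside location /, so the PHP location never inherits it. A hedged sketch of the usual layout, with root declared once at server level and SCRIPT_FILENAME built from $document_root:

        server {
            listen      80;
            server_name www.website.com;

            # declared once at server level so every location, including the PHP one,
            # inherits it instead of the compiled-in default
            root  /home/user/public_html/website.com/public;
            index index.php index.html;

            access_log /home/user/public_html/website.com/log/access.log;
            error_log  /home/user/public_html/website.com/log/error.log;

            location ~ \.php$ {
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_index index.php;
                include       /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }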

  • Evidence-Based-Scheduling - are estimations only as accurate as the work-plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence Based Scheduling (for the uninitiated, Joel explains) for a while now and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered at some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up. Now, there's the garbage-can approach of creating a generic task/scheduled-item for "last minute hacks" or "integration tasks", or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases. It's often the case that during the development stage of a project you realize that there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3 month project may very well turn into a 6 month project, but not because your estimations were off (you could be the best estimator in the world, for those task the comprised your initial work plan); rather because you ended up adding a whole bunch of new tasks that weren't there to begin with. EBS doesn't help you with that. It could, theoretically (I guess). It could, perhaps, measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought. In other words, EBS works on a task basis, but not on a project/release basis - but the latter is what's important. It's what your boss typically cares about - delivery date, not the time it takes to finish each task along the way, and not the time it would have taken, if your planning was perfect. So the question is (yes, there's a question here, don't close it): What's your methodology when it comes to using EBS in FogBugz and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions? Edit Some more thoughts after reading a few answers: If it comes down to having to choose which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and choosing 80%, or 95%, or 60% (based on what, exactly?) then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case by case hour-sized estimation effort step? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be? People may be consistently bad estimators that do not even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good in planning as well? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic/pessimistic about certain tasks, and because of neglect to account for systematic delays (e.g. sick days, major bug crisis) - and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge in this early stage; and I don't see how an EBS-like system could fill that gap. So we're back to methodology. We need to find a way to accommodate bad or incomplete work plans that's better than voodoo-multiplication.

  • Is my large Windows folder slowing down my machine?

    - by Moses
    I have a problem with my Windows installation running very slowly and with my Windows folder being too large, and I suspect the two problems are related. My Windows folder is 17.4 GB. I have 1807 folders prefaced with a $, totalling 2.4 GB. My System32 folder is 1.55 GB. My Microsoft.NET folder is 654 MB - I don't know which programs, if any, are using it. My Service Pack folder is 568 MB, the Software Distribution folder is 536 MB, and the ie8updates folder is 380 MB. How can I reduce the size of these folders, and could their size be why I am running so slow?

  • Create a new inbox folder and save emails

    - by kasunmit
    i am trying http://www.c-sharpcorner.com/uploadfile/rambab/outlookintegration10282006032802am/outlookintegration.aspx[^] this code for create inbox personal folder and save same mails at the datagrid view (outlook 2007 and vsto 2008) i am able to create inbox folder according to above example but couldn't wire code for save e-mails at that example to save contect they r using following code if (chkVerify.Checked) { OutLook._Application outlookObj = new OutLook.Application(); MyContact cntact = new MyContact(); cntact.CustomProperty = txtProp1.Text.Trim().ToString(); //CREATING CONTACT ITEM OBJECT AND FINDING THE CONTACT ITEM OutLook.ContactItem newContact = (OutLook.ContactItem)FindContactItem(cntact, CustomFolder); //THE VALUES WE CAN GET FROM WEB SERVICES OR DATA BASE OR CLASS. WE HAVE TO ASSIGN THE VALUES //TO OUTLOOK CONTACT ITEM OBJECT . if (newContact != null) { newContact.FirstName = txtFirstName.Text.Trim().ToString(); newContact.LastName = txtLastName.Text.Trim().ToString(); newContact.Email1Address = txtEmail.Text.Trim().ToString(); newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString(); newContact.BusinessAddress = txtAddress.Text.Trim().ToString(); if (chkAdd.Checked) { //HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION. if(string.IsNullOrEmpty(txtProp1.Text.Trim().ToString())) { MessageBox.Show("please add value to Your Custom Property"); return; } newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText); newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString(); } newContact.Save(); this.Close(); } else { //IF THE CONTACT DOES NOT EXIST WITH SAME CUSTOM PROPERTY CREATES THE CONTACT. newContact = (OutLook.ContactItem)CustomFolder.Items.Add(OutLook.OlItemType.olContactItem); newContact.FirstName = txtFirstName.Text.Trim().ToString(); newContact.LastName = txtLastName.Text.Trim().ToString(); newContact.Email1Address = txtEmail.Text.Trim().ToString(); newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString(); newContact.BusinessAddress = txtAddress.Text.Trim().ToString(); if (chkAdd.Checked) { //HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION. if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString())) { MessageBox.Show("please add value to Your Custom Property"); return; } newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText); newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString(); } newContact.Save(); this.Close(); } } else { OutLook._Application outlookObj = new OutLook.Application(); OutLook.ContactItem newContact = (OutLook.ContactItem)CustomFolder.Items.Add(OutLook.OlItemType.olContactItem); newContact.FirstName = txtFirstName.Text.Trim().ToString(); newContact.LastName = txtLastName.Text.Trim().ToString(); newContact.Email1Address = txtEmail.Text.Trim().ToString(); newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString(); newContact.BusinessAddress = txtAddress.Text.Trim().ToString(); if (chkAdd.Checked) { //HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION. 
if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString())) { MessageBox.Show("please add value to Your Custom Property"); return; } newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText); newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString(); } newContact.Save(); this.Close(); } } else { //CREATES THE OUTLOOK CONTACT IN DEFAULT CONTACTS FOLDER. OutLook._Application outlookObj = new OutLook.Application(); OutLook.MAPIFolder fldContacts = (OutLook.MAPIFolder)outlookObj.Session.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderContacts); OutLook.ContactItem newContact = (OutLook.ContactItem)fldContacts.Items.Add(OutLook.OlItemType.olContactItem); //THE VALUES WE CAN GET FROM WEB SERVICES OR DATA BASE OR CLASS. WE HAVE TO ASSIGN THE VALUES //TO OUTLOOK CONTACT ITEM OBJECT . newContact.FirstName = txtFirstName.Text.Trim().ToString(); newContact.LastName = txtLastName.Text.Trim().ToString(); newContact.Email1Address = txtEmail.Text.Trim().ToString(); newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString(); newContact.BusinessAddress = txtAddress.Text.Trim().ToString(); newContact.Save(); this.Close(); } } /// /// ENABLING AND DISABLING THE CUSTOM FOLDER AND PROPERY OPTIONS. /// /// /// private void rdoCustom_CheckedChanged(object sender, EventArgs e) { if (rdoCustom.Checked) { txFolder.Enabled = true; chkAdd.Enabled = true; chkVerify.Enabled = true; txtProp1.Enabled = true; } else { txFolder.Enabled = false; chkAdd.Enabled = false; chkVerify.Enabled = false; txtProp1.Enabled = false; } } i don t have idea to convert it to save e-mails in the datagrid view the data gride view i am mentioning here is containing details (sender address, subject etc.) of unread mails and the i i am did was perform some filter for that mails as follows string senderMailAddress = txtMailAddress.Text.ToLower(); List list = (List)dgvUnreadMails.DataSource; List myUnreadMailList; List filteredList = (List)(from ci in list where ci.SenderAddress.StartsWith(senderMailAddress) select ci).ToList(); dgvUnreadMails.DataSource = filteredList; it was done successfully then i need to save those filtered e-mails to that personal inbox folder i created already for that pls give me some help my issue is that how can i assign outlook object just like they assign it to contacts (name, address, e-mail etc.) because in the e-mails we couldn't find it ..
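
    The contact-centric sample above does not really carry over to mail items. A minimal sketch of the mail case follows; it assumes "app" is the running Outlook.Application, "customFolder" is the personal inbox folder created as in the linked article, and that each grid row stores the EntryID of the message it shows (all of those names are placeholders for whatever the real code uses):

        // OutLook = Microsoft.Office.Interop.Outlook, aliased as in the linked sample.
        private void SaveMailToCustomFolder(OutLook.Application app,
                                            OutLook.MAPIFolder customFolder,
                                            string entryId)
        {
            OutLook.NameSpace session = app.GetNamespace("MAPI");
            object item = session.GetItemFromID(entryId, Type.Missing);
            OutLook.MailItem mail = item as OutLook.MailItem;
            if (mail != null)
            {
                // Move transfers the whole message (sender, subject, body...) into the target folder.
                mail.Move(customFolder);
            }
        }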

  • Hosting a Subversion working copy in a remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I encountered. First of all: whats this about? I am currently trying to set up a distributed working enviroment for developing a web page. My plan was to setup a SVN repository for version control, a live server where the actual live page ist hosted and a development server where I can work on the page. To ease things I intended to not have a local copy of the project on my disk, but to actually work directy on the files, that the development server hosts. For that I setup a WebDAV directory, under devserver.com/workspace, that actually mapped to files served under devserver.com/. So I could connect to devserver.com/workspace, change something and view the results live at devserver.com/. So far this worked perfectly. The next step was to create a SVN repository that would take care of my version control. I intended to be able to checkin to the reposiroty from my development server and at any time, with a small shell script, deploy any revision from the svn to the live server by checking out a copy of the revision into the live server directories. The second part, checking out into the live server, also worked perfectly. The first part though is where problems arose: My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows built-in WebDAV support, which worked quite well. I can create, move, delete, edit, whatever files on my WebDAV share from my Windows machine perfectly. The next step was to checkout a working copy from the SVN (actually hosted at devserver.com/subversion/) into the WebDAV share. In the first try I used the Eclipse plugin subversive. The actual checkout worked fine and I can update and commit stuff to the repository, however, I cannot add any files to the ignore list. It always brings me an error. So I tried the same thing with a complete fresh repository using TortoiseSVN - and again it failed with the same errors. Here is what it says when trying to add files to svnignore: Some of selected resources were not added to ignore. svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props' This is what apache2 tells me, when I try to add a file to svnignore: [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated). [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0] Actually both messages are repeated several times. The first one occurs first and is repeated about 5 times and the second comes there after and is repeated propably more than 20 times. If I create a regular file, delete, rename or modify it none of those messages appear in my error.log While writing this question now I was able to add fils to svnignore using TortoiseSVN. However, after that, Eclipse would not let me commit anymore. The error that used to pop up when adding files to svnignore now also shows up while commiting. While searching the web I found some people having this same message appearing because they had files only different in upper- / lower-case naming. I checked my repository and did not find such files. 
I also read somewhere about people having troubles with WebDAV and file locking, because WebDAV's file locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed. This error though did not appear anymore, since I setup a completely fresh repository and working copy. I would really appreciate any help anyone can provide me in fixing this problem! If there are any more questions feel free to ask. I know this is a somewhat unusual setup. Best regards, Daniel

  • Email sent from CentOS ends up in the user's spam folder

    - by oObe
    I am facing this issue, I use the default postfix MTA in centos but the mail end up in user spam folder, but this does not seem to be a problem in Debian using exim4, both host have hostname and domain name configured, and relay mail through external smtp host. Both configuration and recieving email header are attached. The different seems that Debian has this additional (envelope tag) and (from) tag other than some minor syntax differences. Any help to resolve is appreciated. The IP address and DNS is masked as follow: 1.2.3.4 = My IP address smtp.host.com = external smtp host for my company [email protected] = account at smtp host centos.abc.com = Local centos server debian.abc.com = Local debian server Thanks. Centos main.cf config with the following params configured myhostname = centos.abc.com mydomain = abc.com myorigin = centos.abc.com relayhost = smtp.host.com Centos - User receiving mail header Return-Path: <[email protected]> Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP; Thu, 27 Sep 2012 13:36:49 +0800 Received: by centos.abc.com (Postfix, from userid 0) id 1E0637B89; Fri, 28 Sep 2012 13:36:39 +0800 (SGT) Return-Path: <[email protected]> Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP; Thu, 27 Sep 2012 13:36:49 +0800 Received: by centos.abc.com (Postfix, from userid 0) id 1E0637B89; Fri, 28 Sep 2012 13:36:39 +0800 (SGT) Date: Fri, 28 Sep 2012 13:36:39 +0800 To: [email protected] Subject: Test mail from centos User-Agent: Heirloom mailx 12.4 7/29/08 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-Id: <[email protected]> From: [email protected] (root) X-SmarterMail-TotalSpamWeight: 0 X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message X-Antivirus-Status: Clean http://i.imgur.com/7WAYX.jpg Debain exim4 config .... # This is a Debian specific file dc_eximconfig_configtype='smarthost' dc_other_hostnames='debian.abc.com' dc_local_interfaces='127.0.0.1 ; ::1' dc_readhost='debian.abc.com' dc_relay_domains='smtp.host.com' dc_minimaldns='false' dc_relay_nets='127.0.0.1' dc_smarthost='smtp.host.com' CFILEMODE='644' dc_use_split_config='false' dc_hide_mailname='true' dc_mailname_in_oh='true' dc_localdelivery='mail_spool' debian - User receiving mail header Return-Path: <[email protected]> Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP; Thu, 27 Sep 2012 15:02:53 +0800 Received: from root by debian.abc.com with local (Exim 4.72) (envelope-from <[email protected]>) id 1TH86d-00010v-G9 for [email protected]; Thu, 27 Sep 2012 15:01:55 +0800 Return-Path: <[email protected]> Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP; Thu, 27 Sep 2012 15:02:53 +0800 Received: from root by debian.abc.com with local (Exim 4.72) (envelope-from <[email protected]>) id 1TH86d-00010v-G9 for [email protected]; Thu, 27 Sep 2012 15:01:55 +0800 Date: Thu, 27 Sep 2012 15:01:55 +0800 Message-Id: <[email protected]> To: [email protected] Subject: Test from debian From: root <[email protected]> X-SmarterMail-TotalSpamWeight: 0 X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message X-Antivirus-Status: Clean http://imgur.com/nMsMA.jpg

  • Blend "Window is not supported in a WPF Project"

    - by Andy Dent
    I am having a frustrating time with Blend reporting "Window is not supported in a Windows Presentation Foundation (WPF) project." due to unbuildable configurations but can't quite work out how to mangle my way out of it. I've worked out it is probably due to my trying to have a single solution with x86 and x64 configurations. There is no way to tell Blend 2 which is the active Solution Configuration and active Solution Platform. I think it's a bit of a weakness in the configuration system, or maybe the way I've set things up, but I have Debug64 and Debug solution configurations one of each is used with the platform x86 and x64. I also think it's a simple sorting problem - x64 comes before x86 and Debug comes before Debug64 so Blend ends up with an unbuildable config of Debug with x64. When I choose the combination of Debug and x64 in VS, its XAML editor can't load either. The solution is a moderately complex one - there's a pure Win32 DLL, C++/CLI Model project and two other WPF assemblies used by the main WPF project. UPDATE I have ripped all the x64 config out of my solution and rebuilt everything with no effect. I then uninstalled Blend 2 and installed Blend 3 - it doesn't like things either. The Visual Studio XAML editor is still very happy as is the program building and running. (echoes of strangled scream of frustration from oz)

  • WiX 3: Using heat.exe to add bulk files to a new WiX project: HEAT5150

    - by Karen Kwong
    If this is a repeat question, please direct me to the existing solution. I wasn't able to find a matching query. We currently use InstallShield. I'm attempting to covert a project with 407 files to a WiX3 installation package. I tried using heat.exe to do some of the automation but I get the following warning for almost every file: c: heat dir "c:\projectDir\projectA" -gg -ke -template:Product -out "c:\install\projectA\heatOutput" heat.exe: warning HEAT5150 : Could not harvest data from a file that was expected to be a SelfReg DLL: c:\projectDir\projectA\plugin1.dll. If this file does not support SelfReg you can ignore this warning. Otherwise, this error detail may be helpful to diagnose the failure: Unable to load file: c:\projectDir\projectA\plugin1.dll, error: 126. Q: Is it normal for this warning to be reported for every file? If there's a current "How To create/convert to your first WiX install project with many files" tutorial, please point me to it. The key requirement is "with many files". Thank-you -Karen Kwong- PS. I know that WiX is designed for incremental install project creation but it would be nice to know if there's an automated way to convert existing install projects.
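
    For what it's worth, HEAT5150 only means heat probed the DLL for SelfReg entry points and could not load it (error 126 usually indicates a missing native dependency on the harvesting machine); for DLLs that do not self-register it is safe to ignore, and COM/registry probing can be switched off so heat does not try at all. A hedged sketch of a bulk-harvest command - the component group, directory reference and preprocessor variable names here are made up and must match your own .wxs:

        heat dir "c:\projectDir\projectA" -gg -ke -srd -sfrag -sreg -scom ^
             -cg ProjectAComponents -dr INSTALLFOLDER -var var.ProjectASourceDir ^
             -template:fragment -out ProjectAFiles.wxs

    The generated fragment is then pulled into the main Product with a ComponentGroupRef pointing at ProjectAComponents, and -var makes the harvested file paths come from a ProjectASourceDir preprocessor variable instead of being hard-coded.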

  • Android - Images from Assets folder in a GridView

    - by Saran
    Hi, I have been working on creating a GridView of images, with the images in the assets folder. The link http://stackoverflow.com/questions/1933015/opening-an-image-file-inside-the-assets-folder helped me with reading them into a bitmap. The code I currently have is:

        public View getView(final int position, View convertView, ViewGroup parent) {
            try {
                AssetManager am = mContext.getAssets();
                String list[] = am.list("");
                int count_files = imagelist.length;
                for (int i = 0; i <= count_files; i++) {
                    BufferedInputStream buf = new BufferedInputStream(am.open(list[i]));
                    Bitmap bitmap = BitmapFactory.decodeStream(buf);
                    imageView.setImageBitmap(bitmap);
                    buf.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    My application does read the images from the assets folder, but it is not iterating through the cells of the grid view: every cell ends up with the same image from the set. Can anyone tell me how to iterate through the cells and still get different images? The code above lives in an ImageAdapter class that extends BaseAdapter, and in my main class I attach it to the grid view with:

        GridView gv = (GridView) findViewById(R.id.gridview);
        gv.setAdapter(new ImageAdapter(this, assetlist));

    Thanks a lot for any help in advance, Saran
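
    A sketch of a position-based getView() is below. It assumes the adapter keeps the asset file names in a String[] field (here called imageList, filled once in the constructor from getAssets().list("")) so each cell decodes only the file that matches its own position, and that the usual android.widget / android.graphics imports are present:

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            ImageView imageView;
            if (convertView == null) {
                imageView = new ImageView(mContext);
                imageView.setLayoutParams(new GridView.LayoutParams(85, 85));
                imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
            } else {
                imageView = (ImageView) convertView;
            }
            InputStream in = null;
            try {
                // one file per cell: pick the asset that matches this cell's position
                in = mContext.getAssets().open(imageList[position]);
                imageView.setImageBitmap(BitmapFactory.decodeStream(in));
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                if (in != null) { try { in.close(); } catch (IOException ignored) { } }
            }
            return imageView;
        }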

  • iOS: How to upload a file to a specific Google Drive folder using the Google Drive SDK library

    - by loganathan
    I integrated google drive sdk with my ios app. But i do not know how to upload a file to google drive specific folder. Here the code am using to upload the file. But this one uploading the file to my google drive root folder. Any one share a code to upload a file to google drive specific folder?. My Code: -(void)uploadFileToGoogleDrive:(NSString*)fileName { GTLDriveFile *driveFile = [[[GTLDriveFile alloc]init] autorelease]; driveFile.mimeType = @"application/pdf"; driveFile.originalFilename = @"test.doc"; driveFile.title = @"test.doc"; NSString *filePath = [LocalFilesDetails getUserDocumentFullPathForFileName:fileName isSignedDocument:YES]; GTLUploadParameters *uploadParameters = [GTLUploadParameters uploadParametersWithData:[NSData dataWithContentsOfFile:filePath] MIMEType:@"application/pdf"]; GTLQueryDrive *query = [GTLQueryDrive queryForFilesInsertWithObject:driveFile uploadParameters:uploadParameters]; [self.driveService executeQuery:query completionHandler:^(GTLServiceTicket *ticket, GTLDriveFile *updatedFile, NSError *error) { if (error == nil) { NSLog(@"\n\nfile uploaded into google drive\\<my_folder> foler"); } else { NSLog(@"\n\nfile uplod failed google drive\\<my_folder> foler"); } }]; }
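
    With the (legacy) Drive v2 GTL classes used above, a file ends up in a specific folder when its parents collection points at that folder's file ID, so the usual change is to set the parent before building the insert query. A sketch - kTargetFolderId is a placeholder for the destination folder's Drive file ID (obtainable from the folder's web URL or a files.list query):

        // Sketch: attach the parent folder to the file metadata before the upload query.
        GTLDriveParentReference *parentRef = [GTLDriveParentReference object];
        parentRef.identifier = kTargetFolderId;   // placeholder for the real folder ID
        driveFile.parents = @[ parentRef ];

        GTLQueryDrive *query = [GTLQueryDrive queryForFilesInsertWithObject:driveFile
                                                           uploadParameters:uploadParameters];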

  • Returning Meaningful Exceptions from a WCF Project

    - by MissingLinq
    I am pretty new to WCF in general; what little experience I have comes from using fairly simple .svc services in ASP.NET web applications. I tried experimenting with a WCF project for the first time and ran into a major show-stopper. As I'm sure most are aware, in a web application where customErrors mode is set to On, services (both .asmx and .svc) will not return exception details to the client: the exception and stack trace are stripped out, and the message always reads "There was an error processing the request", which is not at all helpful. When the services are hosted directly inside the web application itself, it's easy to work around this restriction by placing them in a dedicated folder and turning customErrors off for that folder. However, I'm running into the same issue - exceptions not being returned - from services that live in a separate WCF project, and I don't know how to work around that. In a nutshell: I need my WCF project's services to bubble real exceptions to the client, or at least the original exception message, instead of "There was an error processing the request".
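
    For completeness, the service-side knob that usually gets WCF to return the real exception during development is the serviceDebug behavior (turn it back off for production, since it leaks stack traces). A sketch of the relevant web.config fragment, assuming the behavior is then applied to the service via behaviorConfiguration="debugBehavior":

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="debugBehavior">
                <!-- sends the real exception message and stack trace back in the fault -->
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    For errors that should be exposed deliberately and permanently, the supported route is a [FaultContract] on the operation and throwing FaultException<TDetail> from the service.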

  • Maven: generate an Eclipse project for a custom packaging

    - by Riduidel
    Hi, for a project I'm working on I've defined a custom Maven packaging and the associated lifecycle (through a components.xml and the definition of a LifecycleMapping). This packaging corresponds to a specific kind of development, for which an Eclipse plugin has been created. What I would like to do is configure Eclipse according to my pom.xml content. I've looked at "Customizable build lifecycle", but I'm more than confused by the information provided. From what I understand, I must define a build plugin in my target project, in which I'll add configuration elements specific to my project. As an example, having a configurator called mycompany.mydev.MyEclipseConfigurator, I'd have to write:

        <build>
          <plugins>
            <plugin>
              <groupId>org.maven.ide.eclipse</groupId>
              <artifactId>lifecycle-mapping</artifactId>
              <version>0.9.9-SNAPSHOT</version>
              <configuration>
                <mappingId>customizable</mappingId>
                <configurators>
                  <configurator id='mycompany.mydev.MyEclipseConfigurator'/>
                </configurators>
              </configuration>
            </plugin>
          </plugins>
        </build>

    Am I right?

  • Installed VS Express 2010 with .NET 4.0 and now .NET 3.5 setup project adds 15 dependencies

    - by Heckflosse_230
    Hi, I installed VS Express 2010 with .NET 4.0 and now a .NET 3.5 setup project in VS 2008 adds 15 dependencies (below), what is going on??? I did not change anything in the project in between installing VS 2010, VS 2008 is packagin the following files in the project: ==================== Packaging file 'Microsoft.Transactions.Bridge.dll'... Packaging file 'System.Core.dll'... Packaging file 'System.Data.DataSetExtensions.dll'... Packaging file 'System.Data.Entity.dll'... Packaging file 'System.Data.Linq.dll'... Packaging file 'System.Data.Services.Client.dll'... Packaging file 'System.Data.Services.Design.dll'... Packaging file 'System.IdentityModel.Selectors.dll'... Packaging file 'System.IdentityModel.dll'... Packaging file 'System.Runtime.Serialization.dll'... Packaging file 'System.ServiceModel.Web.dll'... Packaging file 'System.ServiceModel.dll'... Packaging file 'System.Web.Abstractions.dll'... Packaging file 'System.Web.Extensions.dll'... Packaging file 'System.Xml.Linq.dll'... ==================== I've uninstalled VS 2010 and .NET 4.0 but to no avail, same problem. Lesson learned: DON'T EXPERIMENT ON DEVELOPMENT MACHINE! Thanks, Chris

  • JBoss Seam project cannot be run/deployed

    - by user1494328
    I created sample application in Seam framework (Seam Web Project) and JBoss Server 7.1. When I try run application, console dislays: 23:29:35,419 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC00001: Failed to start service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_24] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24] Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:85) at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: org.jboss.jca.common.metadata.ParserException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:183) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:119) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:82) at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:80) ... 
6 more 23:29:35,452 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015877: Stopped deployment secoundProject-ds.xml in 1ms 23:29:35,455 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015863: Replacement of deployment "secoundProject-ds.xml" by deployment "secoundProject-ds.xml" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE: Failed to process phase PARSE of deployment \"secoundProject-ds.xml\""}} 23:29:35,457 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "secoundProject-ds.xml" 23:29:35,920 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-1) MSC00001: Failed to start service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_24] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24] Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:85) at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: org.jboss.jca.common.metadata.ParserException: IJ010061: Unexpected element: local-tx-datasource at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:183) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:119) at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:82) at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:80) ... 
6 more 23:29:35,952 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml" My secoundProject-ds.xml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE datasources PUBLIC "-//JBoss//DTD JBOSS JCA Config 1.5//EN" "http://www.jboss.org/j2ee/dtd/jboss-ds_1_5.dtd"> <datasources> <local-tx-datasource> <jndi-name>secoundProjectDatasource</jndi-name> <use-java-context>true</use-java-context> <connection-url>jdbc:mysql://localhost:3306/database</connection-url> <driver-class>com.mysql.jdbc.Driver</driver-class> <user-name>root</user-name> <password></password> </local-tx-datasource> </datasources> When I comment tags errors disappear, but application is disabled in browser (The requested resource (/secoundProject/) is not available.). What should I do to fix this problem?
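
    The parse failure is AS 7 rejecting the old jboss-ds_1_5.dtd vocabulary: <local-tx-datasource> no longer exists in JBoss AS 7.1. A hedged sketch of what a deployable *-ds.xml looks like under the newer IronJacamar schema is below; the driver name must match either a MySQL driver jar deployed to the server or a <driver> registered in standalone.xml, and AS 7 JNDI names have to start with java:/ or java:jboss/:

        <?xml version="1.0" encoding="UTF-8"?>
        <datasources xmlns="http://www.jboss.org/ironjacamar/schema">
            <datasource jndi-name="java:jboss/datasources/secoundProjectDatasource"
                        pool-name="secoundProjectPool" enabled="true" use-java-context="true">
                <connection-url>jdbc:mysql://localhost:3306/database</connection-url>
                <!-- placeholder: the name of your deployed MySQL driver jar or configured driver -->
                <driver>mysql-connector-java.jar</driver>
                <security>
                    <user-name>root</user-name>
                    <password></password>
                </security>
            </datasource>
        </datasources>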

  • Why does rsync spawn multiple processes for me?

    - by Yoga
    I am using the following cron entry to back up from one folder to another on the same machine:

        19 21 * * * root rsync -ac --delete /source/folder /dest/folder

    When I use pstree, I see that cron forked three rsync processes:

        +-cron---cron---rsync---rsync---rsync

    And in ps:

        9972 ?  Ds  1:00 rsync -ac --delete /source/folder /dest/folder
        9973 ?  S   0:29 rsync -ac --delete /source/folder /dest/folder
        9974 ?  S   0:09 rsync -ac --delete /source/folder /dest/folder

    Why are there three processes? Can I limit it to only one?

  • VSFTPD Unable to set write permissions on folder

    - by Frank Astin
    I've just set up my first FTP server with VSFTPD on cent os . I can connect to it fine using a user in the group ftp-users but I get read only access . I've tried several different CHMOD codes on the folder (even 777) all to no avail . This is the tutorial I used to set up the server http://tinyurl.com/73pyuxz hopefully you'll be able to see something I missed. Thanks in advance . Requested Config File : # Example config file /etc/vsftpd/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. 
#ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd whith two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES
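
    Since write_enable=YES is already set, the usual remaining suspects on CentOS are plain filesystem ownership and SELinux. A hedged sketch follows; the upload path is only an example, and the SELinux boolean is named allow_ftpd_full_access on older CentOS releases and ftpd_full_access on newer ones:

        # give the ftp-users group ownership and group write access to the tree
        chown -R ftpuser:ftp-users /srv/ftp/upload
        chmod -R 775 /srv/ftp/upload

        # see whether SELinux is what is refusing the writes
        getenforce
        getsebool -a | grep ftp

        # let vsftpd write to user directories; -P makes the change persistent
        setsebool -P allow_ftpd_full_access on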

  • Visual Studio 2010 setup project problem

    - by Guru
    Hi there, I've made an application that uses .NET framework 3.5 SP1 and SQL Server 2008 Express. Application is fine and now i'm going to to make a setup project for this. When I first build my setup it was fine as all the prerequisites were not included in setup. But I want my setup to install .NET 3.5 SP1 and SQL SERVER 2008 Express also. So for this I've changed the options in setup project's properties from "Download prerequisites from following location" to "Download prerequisites from the same location as my application". In addition to that I've also checked the options above like .NET 3.5 SP1 and SQL Server 2008 Express etc. After doing all this I build my project again. This time I'm Getting 57 Errors. Error 1 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 2 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 3 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 4 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup As the question will become too large so I'm just pasting 3 errors but there are totally 57 errors. Please help me . Thanks in advance Guru

  • ASP.NET Web Deployment Project: override applicationSettings

    - by citronas
    I have a web deployment project for a web application project in VS 2008. While building the web deployment project I want to replace properties in the web.config. My settings are auto-generated by the designer:

        <applicationSettings>
          <NAMESPACE.Properties.Settings>
            <setting name="Testenvironment" serializeAs="String">
              <value>True</value>
            </setting>
          </NAMESPACE.Properties.Settings>
        </applicationSettings>

    The config file that contains the settings for the specific server looks like the following:

        <?xml version="1.0"?>
        <applicationSettings>
          <NAMESPACE.Properties.Settings>
            <setting name="Testenvironment" serializeAs="String">
              <value>False</value>
            </setting>
          </NAMESPACE.Properties.Settings>
        </applicationSettings>

    Sadly, this does not work. I get the error "The format of a configSource file must be an element containing the name of the section", and it highlights the second line of the second example. How must the tag be named to make everything work? Edit: deleting the applicationSettings tags does not work either.
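
    As far as I can tell, the replacement file must contain exactly one element named after the section being replaced, and applicationSettings is a section group rather than a section, so the replacement is normally declared against the inner NAMESPACE.Properties.Settings section. A sketch of how that would look (the .wdproj item syntax below is from memory and worth double-checking):

        <!-- Settings.Release.config: the section element itself is the document root -->
        <NAMESPACE.Properties.Settings>
          <setting name="Testenvironment" serializeAs="String">
            <value>False</value>
          </setting>
        </NAMESPACE.Properties.Settings>

        <!-- in the .wdproj file -->
        <ItemGroup>
          <WebConfigReplacementFiles Include="Settings.Release.config">
            <Section>applicationSettings/NAMESPACE.Properties.Settings</Section>
          </WebConfigReplacementFiles>
        </ItemGroup>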

  • Unable to import Eclipse project to Android studio

    - by Binoy Babu
    Whenever I try to import my Eclipse project to Android Studio I get the following error : You are using an old, unsupported version of Gradle. Please use version 1.8 or greater. Please point to a supported Gradle version in the project's Gradle settings or in the project's Gradle wrapper (if applicable.) Consult IDE log for more details (Help | Show Log) Im using Android Studio 0.3 and Ubuntu, I also tried it on a Windows 8 box with fresh install but getting the same error. I'm using default gradle wrapper and I tried checking and unchecking auto import option. Is this a bug? How can I get around it. How do I update gradle to 1.8 or check the current gradle version? My build.gradle is given below. buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.6.3' // I also tried using 0.6.1 and 0.5.+ } } apply plugin: 'android' dependencies { compile fileTree(dir: 'libs', include: '*.jar') } android { compileSdkVersion 18 buildToolsVersion "18.0.1" sourceSets { main { manifest.srcFile 'AndroidManifest.xml' java.srcDirs = ['src'] resources.srcDirs = ['src'] aidl.srcDirs = ['src'] renderscript.srcDirs = ['src'] res.srcDirs = ['res'] assets.srcDirs = ['assets'] } // Move the tests to tests/java, tests/res, etc... instrumentTest.setRoot('tests') // Move the build types to build-types/<type> // For instance, build-types/debug/java, build-types/debug/AndroidManifest.xml, ... // This moves them out of them default location under src/<type>/... which would // conflict with src/ being used by the main source set. // Adding new build types or product flavors should be accompanied // by a similar customization. debug.setRoot('build-types/debug') release.setRoot('build-types/release') } }
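
    With the default wrapper, the Gradle version is whatever gradle/wrapper/gradle-wrapper.properties points at, so the usual fix is to bump the distribution there (and keep the 0.6.x android plugin in build.gradle). A sketch of the wrapper file; running gradle -v, or letting the wrapper download this distribution, will confirm which version is actually in use:

        # gradle/wrapper/gradle-wrapper.properties
        distributionBase=GRADLE_USER_HOME
        distributionPath=wrapper/dists
        zipStoreBase=GRADLE_USER_HOME
        zipStorePath=wrapper/dists
        distributionUrl=http\://services.gradle.org/distributions/gradle-1.8-all.zip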

  • CS Master's Degree Project vs. Thesis options

    - by Nwosh
    I'm doing a master's degree in computer science, and I'm currently at the point where I have to decide between the thesis and non-thesis options offered by my university. The thesis option was my first choice, it entails taking less courses but tends to take more time doing your thesis. The non-thesis option involves taking more coursework, taking a comprehensive exam, and doing a project in one semester with a faculty member. I'd like to pursue a PhD degree eventually (although not right away, I want to get some years of professional experience first), and I heard that having demonstrated the ability to work on a thesis helps a lot with admission (like: not doing thesis raises questions and suggests not being interested in research) and that the experience itself is very good. At the same time, almost everyone I know who did a thesis at my university took a long time (2-3 years), in theory it could be done in 1.5 years. I'm a part time student and I don't really want to spend so much time just getting a master's degree, I could still publish a few papers while working on the project option and I'd be done in a year or so, additionally, I heard having a master's degree with a project and more coursework is more desirable for the industry. So, when applying for a PhD degree in CS at some of the better universities, would the time spent working on the master's thesis help in getting me accepted? Or should I opt for the non-thesis option and hope that the extra coursework and publishing some papers would make up for not working on a thesis?

  • C++ errors not shown in Visual Studio C# project

    - by Diana
    In Visual Studio 2008 I have a .NET 3.5 C# project that uses a DLL compiled from another C# project (let's call it DLL A). DLL A in turn uses some C++ libraries. The problem is that when an error occurs while calling objects from DLL A, the application just closes without showing any error, but I need to know what the problem is; I cannot keep guessing blindly throughout the project. I checked the Windows event log and could not find anything. I checked the exception settings in Visual Studio (Debug > Exceptions); all of them are checked, including C++ exceptions, so any errors should be thrown. My code looks something like this:

        tessnet2.Tesseract tessocr = new tessnet2.Tesseract();
        tessocr.Init(@"s:\temp\tessdata", "eng", false);
        tessocr.GetThresholdedImage(bmp, Rectangle.Empty).Save("s:\\temp\\" + Guid.NewGuid().ToString() + ".bmp");
        List<tessnet2.Word> words = ocr.DoOCR(bmp, "eng"); // app exits at this line

    If I put something like int x = Convert.ToInt32("test"); in my code, it should throw an error - and it does, and Visual Studio shows it. Does anyone have any idea why the errors are not being shown, or where else they could be logged? Any help is very much appreciated. Thanks!
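
    Crashes inside native C++ code frequently take the whole process down before a managed exception ever surfaces, which would explain the silent exit. One way to at least capture whatever is known before the process dies is to hook the last-chance exception events early in Main; a sketch for a WinForms app, where MainForm is a placeholder for the application's real form. Enabling unmanaged code debugging in the C# project's Debug settings also lets Visual Studio break inside the C++ libraries:

        using System;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // route exceptions raised on the UI thread through our handler
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += (s, e) =>
                    MessageBox.Show(e.Exception.ToString(), "UI thread exception");

                // last-chance hook for exceptions on any other thread (and many interop failures)
                AppDomain.CurrentDomain.UnhandledException += (s, e) =>
                    MessageBox.Show(e.ExceptionObject.ToString(), "Unhandled exception");

                Application.EnableVisualStyles();
                Application.Run(new MainForm()); // MainForm is a placeholder
            }
        }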

  • MonoRail: Testing, Route Extensions, Folder Structures

    - by Kezzer
    I've got a few questions related to the use of MonoRail Testing Does everyone tend to use NUnit for their testing? I haven't worked enough with testing to know if this is a good testing framework to use. I'm just looking to get more into testing my applications a lot more than before and wanted to know if there's any general guidelines. Are you supposed to copy the controller over to a test area and just rename it with test in the name and re-run it? How do you ensure your test project and main project coincide with one another? Is it just a case of copying everything over again or are there tools available to do it for you? Route Extensions MonoRail tends to use <action>.rails, can you omit the .rails part if you configure your routing correctly? Why does this seem to be the standard? Folder Structures I haven't found anywhere which really points out your standard folder structure. Sure, you have Controllers, Models, and Views. But your Models folder should contain your data access objects as well. I've seen some have something like -> Models -> DaoClasses -> Entities But what about custom structures used to get data out of views? And if you're using NHibernate, where's a good place to stick the mappings? I know it's entirely dependent on the developer, but I haven't really seen any standard approach. Cheers
