Search Results

Search found 13302 results on 533 pages for 'common practice'.

  • Uploading file to DropBox causing UnlinkedException

    - by Boardy
    I am currently working on android project and trying to enable DropBox functionality. I've selected the access type to App Only, and I can successfully authenticate and it creates an Apps directory and inside that creates a directory with the name of my app. When I then try to put a file in the DropBox directory it goes into the DropBoxException catch and in the logcat prints com.dropbox.client2.exception.DropboxUnlinkedException. I've done a google and from what I've seen this happens if the apps directory has been deleted and so authentication is required to be re-done, but this isn't the case, I have deleted it and I am putting the file straight after doing the authentication. Below is the code that retrieves the keys and stores the file on dropbox. AccessTokenPair tokens = getTokens(); UploadFile uploadFile = new UploadFile(context, common, this, mDBApi); uploadFile.execute(mDBApi); Below is the code for the getTokens method (don't think this would help but you never know) private AccessTokenPair getTokens() { AccessTokenPair tokens; SharedPreferences prefs = context.getSharedPreferences("prefs", 0); String key = prefs.getString("dropbox_key", ""); String secret = prefs.getString("dropbox_secret", ""); tokens = new AccessTokenPair(key, secret); return tokens; } Below is the class that extends the AsyncTask to perform the upload class UploadFile extends AsyncTask<DropboxAPI<AndroidAuthSession>, Void, bool> { Context context; Common common; Synchronisation sync; DropboxAPI<AndroidAuthSession> mDBApi; public UploadFile(Context context, Common common, Synchronisation sync, DropboxAPI<AndroidAuthSession> mDBApi) { this.context = context; this.common = common; this.sync = sync; this.mDBApi = mDBApi; } @Override protected bool doInBackground(DropboxAPI<AndroidAuthSession>... params) { try { File file = new File(Environment.getExternalStorageDirectory() + "/BoardiesPasswordManager/dropbox_sync.xml"); FileInputStream inputStream = new FileInputStream(file); Entry newEntry = mDBApi.putFile("android_sync.xml", inputStream, file.length(), null, null); common.showToastMessage("Successfully uploade Rev: " + newEntry.rev, Toast.LENGTH_LONG); } catch (IOException ex) { Log.e("DropBoxError", ex.toString()); } catch (DropboxException e) { Log.e("DropBoxError", e.toString()); e.printStackTrace(); } return null; } I have no idea why it would display the UnlinkedException so any help would be greatly appreciated.
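
    A note on the snippet above: Java has no bool type, so the AsyncTask and its doInBackground method should be parameterized with Boolean (or Void if the result is unused). As for the DropboxUnlinkedException itself, below is a minimal sketch of a pre-upload check, meant to sit in the same class as the asker's code. It assumes the legacy Dropbox Core SDK v1 (com.dropbox.client2) session API; uploadSyncFile() is a hypothetical stand-in for kicking off the asker's UploadFile task.

        // Sketch only: assumes the com.dropbox.client2 (Core SDK v1) API.
        private boolean ensureLinkedBeforeUpload(DropboxAPI<AndroidAuthSession> mDBApi) {
            AndroidAuthSession session = mDBApi.getSession();
            AccessTokenPair tokens = getTokens();              // the asker's SharedPreferences helper
            if (tokens.key == null || tokens.key.length() == 0) {
                session.startAuthentication(context);          // never linked (or prefs cleared): redo auth
                return false;
            }
            session.setAccessTokenPair(tokens);                // make sure this session instance holds the tokens
            if (!session.isLinked()) {
                session.startAuthentication(context);          // tokens not accepted: relink
                return false;
            }
            uploadSyncFile(mDBApi);                            // hypothetical: start the upload AsyncTask
            return true;
        }

    If isLinked() comes back false even though tokens are stored, the tokens were most likely revoked or were created for a different app key/secret, which would also produce the behaviour described above.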

    Read the article

  • What popular "best practices" are not always best, and why?

    - by SnOrfus
    "Best practices" are everywhere in our industry. A Google search on "coding best practices" turns up nearly 1.5 million results. The idea seems to bring comfort to many; just follow the instructions, and everything will turn out fine. When I read about a best practice - for example, I just read through several in Clean Code recently - I get nervous. Does this mean that I should always use this practice? Are there conditions attached? Are there situations where it might not be a good practice? How can I know for sure until I've learned more about the problem? Several of the practices mentioned in Clean Code did not sit right with me, but I'm honestly not sure if that's because they're potentially bad, or if that's just my personal bias talking. I do know that many prominent people in the tech industry seem to think that there are no best practices, so at least my nagging doubts place me in good company. The number of best practices I've read about are simply too numerous to list here or ask individual questions about, so I would like to phrase this as a general question: Which coding practices that are popularly labeled as "best practices" can be sub-optimal or even harmful under certain circumstances? What are those circumstances and why do they make the practice a poor one? I would prefer to hear about specific examples and experiences.

    Read the article

  • Why use try … finally without a catch clause?

    - by Nick Rosencrantz
    The classical way to program is with try/catch, but when is it appropriate to use try without catch? In Python the following appears legal and can make sense: try: #do work finally: #do something unconditional But we didn't catch anything. Similarly, in Java one could write: try { //for example, get a database connection } finally { //closeConnection(connection) } It looks good, and suddenly I don't have to worry about exception types. But if this is good practice, when is it good practice? Or what are the reasons this is not good practice, or not even legal? (I didn't compile the Java source I'm asking about, so it could be a syntax error, but I checked that the Python version compiles.) A related problem I've run into is that when I keep writing the function/method, at the end I must return something from a place that should never be reached, so even if I handle the exceptions above I still end up returning null or an empty string at some unreachable point, often the end of the method/function. I've always managed to restructure the code so that I don't have to return null, since that looks like less than good practice.
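
    To make the Java half concrete, here is a minimal sketch (plain JDBC; the table name is a placeholder) of try/finally with no catch, plus the Java 7 try-with-resources form. The exception still propagates to the caller, the connection is still closed, and because the return sits inside the try there is no unreachable "return null" at the end of the method.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class TryFinallyDemo {

            // try/finally without catch: SQLException propagates to the caller,
            // but the connection is closed on both the normal and the exceptional path.
            static int countRows(String jdbcUrl) throws SQLException {
                Connection conn = DriverManager.getConnection(jdbcUrl);
                try {
                    Statement stmt = conn.createStatement();
                    ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM example_table");
                    rs.next();
                    return rs.getInt(1);   // returning from inside try avoids a dead return at the end
                } finally {
                    conn.close();          // runs whether or not the query threw
                }
            }

            // Since Java 7, try-with-resources expresses the same intent more compactly.
            static int countRowsJava7(String jdbcUrl) throws SQLException {
                try (Connection conn = DriverManager.getConnection(jdbcUrl);
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM example_table")) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
        }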

    Read the article

  • Unable to setup postgres on ubuntu (8.4 on 9.10)

    - by shabda
    I am trying to setup postgres on ubuntu, and I cant proceed as I cant find the location of pg_hba.conf (Postgres 8.4 on ubuntu 9.10) What I did setup postgres via aptitude This gave me ... /usr/share/postgresql/8.4# aptitude install postgresql Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following NEW packages will be installed: libreadline5{a} postgresql postgresql-8.4{a} postgresql-client-8.4{a} postgresql-client-common{a} postgresql-common{a} 0 packages upgraded, 6 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/5,159kB of archives. After unpacking 18.8MB will be used. Do you want to continue? [Y/n/?] Y Writing extended state information... Done Preconfiguring packages ... Selecting previously deselected package libreadline5. (Reading database ... 17490 files and directories currently installed.) Unpacking libreadline5 (from .../libreadline5_5.2-6_amd64.deb) ... Selecting previously deselected package postgresql-client-common. Unpacking postgresql-client-common (from .../postgresql-client-common_101_all.deb) ... Selecting previously deselected package postgresql-client-8.4. Unpacking postgresql-client-8.4 (from .../postgresql-client-8.4_8.4.1-1_amd64.deb) ... Selecting previously deselected package postgresql-common. Unpacking postgresql-common (from .../postgresql-common_101_all.deb) ... Selecting previously deselected package postgresql-8.4. Unpacking postgresql-8.4 (from .../postgresql-8.4_8.4.1-1_amd64.deb) ... Selecting previously deselected package postgresql. Unpacking postgresql (from .../postgresql_8.4.1-1_all.deb) ... Processing triggers for man-db ... Setting up libreadline5 (5.2-6) ... Setting up postgresql-client-common (101) ... Setting up postgresql-client-8.4 (8.4.1-1) ... update-alternatives: using /usr/share/postgresql/8.4/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode. Setting up postgresql-common (101) ... Setting up postgresql-8.4 (8.4.1-1) ... update-alternatives: using /usr/share/postgresql/8.4/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode. Setting up postgresql (8.4.1-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Writing extended state information... Done tried to login as su - postgres Followed by createdb mytestdb This failed with could not connect to database postgres: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? So I though I need to enable local connections in pg_hba.conf, which I cant find. So I created a new file as sudo vim /etc/postgresql/8.4/main/pg_hba.conf With values local all all ident But I still cant login after restarting the service. What are next steps for me to take.

    Read the article

  • How to install MariaDB rpms in CentOS 6.4 using rpm (not yum cmd) + handling mysql-libs conflicts

    - by Pat C
    I need to script the install of MariaDB using the rpm command in CentOS 6.4. I can't use yum since it's going to be an offline install so there's no access to the repository. The only MySQL package installed is mysql-libs as various other packages in CentOS depend on it. When I did a test install of MariaDB with yum it correctly accounted for mysql-libs and uninstalled it at the end as MariaDB could handle the dependencies after it was installed: [root@new-host-6 ~]# yum install MariaDB-client MariaDB-common MariaDB-compat MariaDB-devel MariaDB-server MariaDB-shared Loaded plugins: downloadonly, fastestmirror, refresh-packagekit, security, verify Loading mirror speeds from cached hostfile * base: mirrors.kernel.org * extras: mirror.keystealth.org * updates: mirror.umd.edu Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package MariaDB-client.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-common.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-compat.x86_64 0:5.5.32-1 will be obsoleting ---> Package MariaDB-devel.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-server.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-shared.x86_64 0:5.5.32-1 will be obsoleting ---> Package mysql-libs.x86_64 0:5.1.66-2.el6_3 will be obsoleted --> Finished Dependency Resolution Dependencies Resolved ==================================================================================================================================================================== Package Arch Version Repository Size ==================================================================================================================================================================== Installing: MariaDB-client x86_64 5.5.32-1 mariadb 10 M MariaDB-common x86_64 5.5.32-1 mariadb 23 k MariaDB-compat x86_64 5.5.32-1 mariadb 2.7 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 MariaDB-devel x86_64 5.5.32-1 mariadb 5.6 M MariaDB-server x86_64 5.5.32-1 mariadb 34 M MariaDB-shared x86_64 5.5.32-1 mariadb 1.1 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 Transaction Summary ==================================================================================================================================================================== Install 6 Package(s) Total download size: 53 M Is this ok [y/N]: y Downloading Packages: (1/6): MariaDB-5.5.32-centos6-x86_64-client.rpm | 10 MB 00:06 (2/6): MariaDB-5.5.32-centos6-x86_64-common.rpm | 23 kB 00:00 (3/6): MariaDB-5.5.32-centos6-x86_64-compat.rpm | 2.7 MB 00:02 (4/6): MariaDB-5.5.32-centos6-x86_64-devel.rpm | 5.6 MB 00:06 (5/6): MariaDB-5.5.32-centos6-x86_64-server.rpm | 34 MB 00:23 (6/6): MariaDB-5.5.32-centos6-x86_64-shared.rpm | 1.1 MB 00:00 -------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 1.3 MB/s | 53 MB 00:40 warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY Retrieving key from https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Importing GPG key 0x1BB943DB: Userid: "Daniel Bartholomew (Monty Program signing key) <[email protected]>" From : https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Warning: RPMDB altered outside of yum. 
Installing : MariaDB-compat-5.5.32-1.x86_64 1/7 Installing : MariaDB-common-5.5.32-1.x86_64 2/7 Installing : MariaDB-server-5.5.32-1.x86_64 3/7 chown: cannot access `/var/lib/mysql': No such file or directory PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! To do so, start the server, then issue the following commands: '/usr/bin/mysqladmin' -u root password 'new-password' '/usr/bin/mysqladmin' -u root -h new-host-6 password 'new-password' Alternatively you can run: '/usr/bin/mysql_secure_installation' which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers. See the MariaDB Knowledgebase at http://kb.askmonty.org or the MySQL manual for more instructions. Please report any problems with the '/usr/bin/mysqlbug' script! The latest information about MariaDB is available at http://mariadb.org/. You can find additional information about the MySQL part at: http://dev.mysql.com Support MariaDB development by buying support/new features from Monty Program Ab. You can contact us about this at [email protected]. Alternatively consider joining our community based development effort: http://kb.askmonty.org/en/contributing-to-the-mariadb-project/ Installing : MariaDB-devel-5.5.32-1.x86_64 4/7 Installing : MariaDB-client-5.5.32-1.x86_64 5/7 Installing : MariaDB-shared-5.5.32-1.x86_64 6/7 Erasing : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Verifying : MariaDB-common-5.5.32-1.x86_64 1/7 Verifying : MariaDB-server-5.5.32-1.x86_64 2/7 Verifying : MariaDB-devel-5.5.32-1.x86_64 3/7 Verifying : MariaDB-client-5.5.32-1.x86_64 4/7 Verifying : MariaDB-compat-5.5.32-1.x86_64 5/7 Verifying : MariaDB-shared-5.5.32-1.x86_64 6/7 Verifying : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Installed: MariaDB-client.x86_64 0:5.5.32-1 MariaDB-common.x86_64 0:5.5.32-1 MariaDB-compat.x86_64 0:5.5.32-1 MariaDB-devel.x86_64 0:5.5.32-1 MariaDB-server.x86_64 0:5.5.32-1 MariaDB-shared.x86_64 0:5.5.32-1 Replaced: mysql-libs.x86_64 0:5.1.66-2.el6_3 Complete! My question is, what is the equivalent way to install the MariaDB packages using the rpm command only as opposed to yum? If I do rpm -ivh MariaDB*.rpm, I will get a ton of messages like the following about conflicts with mysql-libs: file /etc/my.cnf from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 I then used the --force option to install the MariaDB rpms and uninstalled mysql-lib, I didn't get any weird messages but I'm not sure that is the cleanest method to handle the conflicts and do the install. So can someone confirm that installing MariaDB with the following rpm commands would be the same as using yum to install the packages and handle mysql-libs conflicts/removal: rpm -ivh --force MariaDB*.rpm rpm -e mysql-libs Thanks for any input!

    Read the article

  • Suddenly unable to export to WAR file from Eclipse

    - by Codeguy007
    I been working on this Servlet project all morning and now suddenly I cannot get eclipse to export the project to a war file. I tried restarting eclipse and cleaning the project but I just get the same result. Any ideas? org.eclipse.core.runtime.CoreException: Extended Operation failure: org.eclipse.jst.j2ee.internal.web.archive.operations.WebComponentExportOperation at org.eclipse.wst.common.frameworks.internal.datamodel.ui.DataModelWizard.performFinish(DataModelWizard.java:182) at org.eclipse.jface.wizard.WizardDialog.finishPressed(WizardDialog.java:742) at org.eclipse.jface.wizard.WizardDialog.buttonPressed(WizardDialog.java:373) at org.eclipse.jface.dialogs.Dialog$2.widgetSelected(Dialog.java:618) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:227) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:66) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:938) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3682) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3293) at org.eclipse.jface.window.Window.runEventLoop(Window.java:820) at org.eclipse.jface.window.Window.open(Window.java:796) at org.eclipse.ui.actions.ExportResourcesAction.run(ExportResourcesAction.java:180) at org.eclipse.ui.actions.BaseSelectionListenerAction.runWithEvent(BaseSelectionListenerAction.java:168) at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:546) at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:490) at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:402) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:66) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:938) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3682) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3293) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2389) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2353) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2219) at org.eclipse.ui.internal.Workbench$4.run(Workbench.java:466) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:289) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:461) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:106) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:169) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:106) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:76) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:363) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:176) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:508) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:447) at org.eclipse.equinox.launcher.Main.run(Main.java:1173) org.eclipse.core.runtime.CoreException[0]: org.eclipse.core.commands.ExecutionException: Error exportingWar File at 
org.eclipse.jst.j2ee.internal.archive.operations.J2EEArtifactExportOperation.execute(J2EEArtifactExportOperation.java:103) at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl$1.run(DataModelPausibleOperationImpl.java:376) at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:1797) at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.runOperation(DataModelPausibleOperationImpl.java:401) at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.runOperation(DataModelPausibleOperationImpl.java:352) at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.doExecute(DataModelPausibleOperationImpl.java:242) at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.executeImpl(DataModelPausibleOperationImpl.java:214) at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.cacheThreadAndContinue(DataModelPausibleOperationImpl.java:89) at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.execute(DataModelPausibleOperationImpl.java:202) at org.eclipse.wst.common.frameworks.internal.datamodel.ui.DataModelWizard$1$CatchThrowableRunnableWithProgress.run(DataModelWizard.java:211) at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:113) Caused by: org.eclipse.jst.j2ee.commonarchivecore.internal.exception.SaveFailureException: IWAE0017E Unable to replace original archive: C:\Users\mark\uploads\myfirstjsp.war at org.eclipse.jst.j2ee.commonarchivecore.internal.impl.ArchiveImpl.cleanupAfterTempSave(ArchiveImpl.java:322) at org.eclipse.jst.j2ee.commonarchivecore.internal.impl.ArchiveImpl.saveAsNoReopen(ArchiveImpl.java:1182) at org.eclipse.jst.j2ee.internal.web.archive.operations.WebComponentExportOperation.export(WebComponentExportOperation.java:54) at org.eclipse.jst.j2ee.internal.archive.operations.J2EEArtifactExportOperation.execute(J2EEArtifactExportOperation.java:95) ... 10 more

    Read the article

  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it could lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe that DRY means that any piece of code that is written more then once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." So the important thing here is "knowledge". Nobody ever said "every piece of code". I try to give some examples of misusing the DRY principle. Code Repetitions by Coincidence There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge, it is just the same by coincidence. It's hard to give an example of such a case. Just think about some lines of code the developer thinks "I already wrote something similar". Then he takes the original code, puts it into a public method, even worse into a base class where none had been there before, puts some weird arguments and some if or switch statements into it to support all special cases and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer not even can give a meaningful name, because its contents isn't anything specific, it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically this method only makes sense to call after some other method had been called. All the symptoms of really bad design is evident. Fact is, writing this kind of "reusable methods" is worse then copy pasting! Believe me. What will happen when you change this weird piece of code? You can't say what'll happen, because you can't understand what the code is actually doing. So better don't touch it anymore. Maintainability just died. Of course this problem is with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system get dependent on it. Completely independent parts get tightly coupled by this common piece of code. Changing on the single common place will have effects anywhere in the system, a typical symptom of too tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you just had a system with a weak design. Now you get a system which just can't be maintained anymore. So what can you do against it? When making code reusable, always identify the generally reusable parts of it. Find the reason why the code is repeated, find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague, if you can't explain or the explanation is to complicated, it's probably not worth to reuse. If you identify the piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific. If you can't give it a simple and explanatory name, you did probably something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, its not so bad if it can't be reused. 
If you are not able to find anything simple and reasonable, copy paste it. Put a comment into the code to reference the other copies. You may find a solution later. Requirements Repetitions by Coincidence Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change it of its own. (Assumed that this similarity is actually by coincidence and not by political membership. There might be basic rules applying to all European countries. etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or - more likely - what happens if users of one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is hard to find requirements to be repeated by coincidence. Then there is not much you can do against the repetition of the code. What you really should consider is to make coding of the rules as simple as possible. So this independent knowledge "Tax Rules in Timbuktu" or wherever should be as pure as possible, without much overhead and stuff that does not belong to it. So you can write every independent requirement short and clean. DRYing try-catch and using Blocks This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine a complex exception handling, including several catch blocks. When the contents of the try block as well as the contents of the individual catch block are trivial, but the whole structure is repeated on many places in the code, there is almost no reasonable way to DRY it. try { // trivial code here using (Thingy thing = new thingy) { //trivial, but always different line of code } } catch(FooException foo) { // trivial foo handling } catch (BarException bar) { // trivial bar handling } catch { // trivial common handling } finally { // trivial finally block } The key here is that every block is trivial, so there is nothing to just move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other and so on. Let's assume that this is a common pattern in service methods within the whole system. Examples of Evil DRYing in this situation: Put a if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: The close coupling of the formerly independent implementation is the strongest. Also the readability of the code and the use of a parameter to control the logic. Put everything into a method which takes a delegate as argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable. The same maintainability problems apply as for any "Code Repetition by Coincidence" situations. Enforce a base class to all the classes where this pattern appears and use the template method pattern. It's the same readability and maintainability problem as above, but additionally complex and tightly coupled because of the base class. 
I would call this "Inheritance by Coincidence", which will not lead to great software design. What can you do against it? Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance; if that were the case, it wouldn't be a problem at all, so I assume it is not such a trivial case. Consider refactoring the error concept to make error handling easier. The last, but not worst, option is to keep the repetitions: some patterns of code simply must be maintained consistently, there is nothing we can do against it, and that is no reason to make the code unreadable. Conclusion The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily for technical reasons, and it requires quite a bit of flexibility and creativity to keep such code simple and maintainable. That is not a problem with the principle; it is the problem of blindly applying a principle without understanding the problem it should solve. The result is usually much worse than ignoring the principle.
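
    To make the "method which takes a delegate as argument" option concrete, here is a minimal Java sketch of the idea (the article's example is C#-like; ServiceRunner, ServiceException and the handler bodies are invented for illustration). It shows both why the repeated structure can be centralized and why the article warns that the call sites become harder to read, since the control flow is now hidden inside the helper.

        import java.util.concurrent.Callable;

        // Hypothetical helper wrapping the repeated try/catch/finally structure
        // around one varying piece of work, passed in as a lambda.
        final class ServiceRunner {

            static <T> T run(Callable<T> body) {
                try {
                    return body.call();                               // the one line that differs per call site
                } catch (IllegalArgumentException e) {
                    throw new ServiceException("bad input", e);       // "trivial foo handling"
                } catch (Exception e) {
                    throw new ServiceException("service failure", e); // "trivial common handling"
                } finally {
                    // trivial finally block, e.g. releasing a per-call resource
                }
            }
        }

        class ServiceException extends RuntimeException {
            ServiceException(String message, Throwable cause) { super(message, cause); }
        }

        // A call site: compact, but the structure is invisible here, which is
        // exactly the readability cost the article describes.
        class CustomerService {
            String loadCustomerName(int id) {
                return ServiceRunner.run(() -> "customer-" + id);     // placeholder body
            }
        }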

    Read the article

  • Quartz.Net Windows Service Configure Logging

    - by Tarun Arora
    In this blog post I’ll be covering, Logging for Quartz.Net Windows Service 01 – Why doesn’t Quartz.Net Windows Service log by default 02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc 03 – Results: Logging in action If you are new to Quartz.Net I would recommend going through, A brief Introduction to Quartz.net Walkthrough of Installing & Testing Quartz.Net as a Windows Service Writing & Scheduling your First HelloWorld job with Quartz.Net   01 – Why doesn’t Quartz.Net Windows Service log by default If you are trying to figure out why… The Quartz.Net windows service isn’t logging The Quartz.Net windows service isn’t writing anything to the event log The Quartz.Net windows service isn’t writing anything to a file How do I configure Quartz.Net windows service to use log4Net How do I change the level of logging for Quartz.Net Look no further, This blog post should help you answer these questions. Quartz.NET uses the Common.Logging framework for all of its logging needs. If you navigate to the directory where Quartz.Net Windows Service is installed (I have the service installed in C:\Program Files (x86)\Quartz.net, you can find out the location by looking at the properties of the service) and open ‘Quartz.Server.exe.config’ you’ll see that the Quartz.Net is already set up for logging to ConsoleAppender and EventLogAppender, but only ‘ConsoleAppender’ is set up as active. So, unless you have the console associated to the Quartz.Net service you won’t be able to see any logging. <log4net> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <root> <level value="INFO" /> <appender-ref ref="ConsoleAppender" /> <!-- uncomment to enable event log appending --> <!-- <appender-ref ref="EventLogAppender" /> --> </root> </log4net> Problem: In the configuration above Quartz.Net Windows Service only has ConsoleAppender active. So, no logging will be done to EventLog. More over the RollingFileAppender isn’t setup at all. So, Quartz.Net will not log to an application trace log file. 02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc Let’s change this behaviour by changing the config file… In the below config file, I have added the RollingFileAppender. This will configure Quartz.Net service to write to a log file. (<appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">) I have specified the location for the log file (<arg key="configFile" value="Trace/application.log.txt"/>) I have enabled the EventLogAppender and RollingFileAppender to be written to by Quartz. Net windows service Changed the default level of logging from ‘Info’ to ‘All’. This means all activity performed by Quartz.Net Windows service will be logged. You might want to tune this back to ‘Debug’ or ‘Info’ later as logging ‘All’ will produce too much data to the logs. (<level value="ALL"/>) Since I have changed the logging level to ‘All’, I have added applicationSetting to remove logging log4Net internal debugging. 
(<add key="log4net.Internal.Debug" value="false"/>) <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> <sectionGroup name="common"> <section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" /> </sectionGroup> </configSections> <common> <logging> <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net"> <arg key="configType" value="INLINE" /> <arg key="configFile" value="Trace/application.log.txt"/> <arg key="level" value="ALL" /> </factoryAdapter> </logging> </common> <appSettings> <add key="log4net.Internal.Debug" value="false"/> </appSettings> <log4net> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="GeneralLog" type="log4net.Appender.RollingFileAppender"> <file value="Trace/application.log.txt"/> <appendToFile value="true"/> <maximumFileSize value="1024KB"/> <rollingStyle value="Size"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{HH:mm:ss} [%t] %-5p %c - %m%n"/> </layout> </appender> <root> <level value="ALL" /> <appender-ref ref="ConsoleAppender" /> <appender-ref ref="EventLogAppender" /> <appender-ref ref="GeneralLog"/> </root> </log4net> </configuration>   Note – Please ensure you restart the Quartz.Net Windows service for the config changes to be picked up by the service   03 – Results: Logging in action Once you start the Quartz.Net Windows Service up, the logging should be initiated to write all activities in the Console, EventLog and File… See screen shots below… Figure – Quartz.Net Windows Service logging all activity to the event log Figure – Quartz.Net Windows Service logging all activity to the application log file Where is the output from log4Net ConsoleAppender? As a default behaviour, the console isn't available in windows services, web services, windows forms. The output will simply be dismissed. Unless you are running the process interactively. Which you can do by firing up Quartz.Server.exe –i to see the output   This was fourth in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering troubleshooting why a scheduled task hasn’t fired on Quartz.net windows service. All Quartz.Net specific blog posts can listed here. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • What are the common Linux commands for SAN-related activities? How do I check if a LUN is attached to the computer?

    - by Nishant
    How do I check if a LUN has been presented to my server? What are the Linux commands for that? Do the LUNs show up in a fdisk -l command like a normal /dev/sda gets listed? What are other commands associated with general SAN related checks in Linux? What is WWN and how does that have any relevance? If we have LUNs, what is the use of multipathing? Bit lengthy but I am not able to get a grasp on the topic. Any help would be appreciated.

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components  Application Modules, as shown here for one of my sample deployed applications. This screen you can access from the EM homepage by pulling down the Application Deployment menu, and then ADF > Configure ADF Business Components. Then select the profile that you are actually using (Hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it. )So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and Update) the MBean Properties. Explanation The pooling parameters and many others are handled through Message Driven Beans that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean will combine the configuration deployed as part of the application, along with any overrides defined as -D environement commands on the JVM startup for the application server instance. Using WLST to Browse the Bean ValuesFor our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.Step 0: Before You Start You will need the followingAccess to the console on the machine that is running the serverThe WebLogic Admin username and password (I'll use weblogic/password as my example here - yours will be different)The name of the deployed application (in this example FMWdh_application1)The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg) This is based on the default path for your model project so it shoudl be fairly easy to work out.The BC configuration your AM is actually running with (look in the DataBindings.cpx for that. In this example DealHelpServiceDeployed is the profile being used..)Step 1: Start the WLST consoleTo start at the beginning, you need to run the WLST command but that needs a little setup:Change to the wlserver_10.3/server/bin directory e.g. under your Fusion Middleware Home[oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/binSet your environment using the setWLSEnv script. e.g. on Oracle Enterprise Linux:[oracle@mymachine bin] source setWLSEnv.shStart the WLST interactive console[oracle@mymachine bin] java weblogic.WLSTInitializing WebLogic Scripting Tool (WLST) ...Welcome to WebLogic Server Administration Scripting ShellType help() for help on available commandswls:/offline> Step 2:Enter the WLST commandsConnect to the server wls:> connect('weblogic','password')Change to the Custom root, this is where the AMPooling MBeans are registered wls:> custom()Change to the b4j MBean directorywls:> cd ('oracle.bc4j.mbean.config')Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see: One for the config entry itself, and then one each for Security, DB Connection and AM Pooling. 
So if you deploy an app with just one configuration you'll see 5 directories, if it has two configurations in the .xcfg you'll see 9 and so on.The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the Application Name, the configuration you are using and the xcfg name: First of all narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications. Now look for the correct application name e.g. Application=FMWdh_application1The config setting in that sub-list should already be correct and match what you expect e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfgFinally look for the correct value for the AppModuleConfigType e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployedNow you have identified the correct directory name, change to that (keep the name on one line of course - I've had to split it across lines here for clarity:wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,     type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,    oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,    Application=FMWdh_application1,    oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed') Now you can actually view the parameter values with a simple ls() commandwls:> ls()And here's the output in which you can view the realtime values of the various pool settings: -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy -rw- AmpoolDoampooling true -rw- AmpoolDynamicjdbccredentials false -rw- AmpoolInitpoolsize 2 -rw- AmpoolIsuseexclusive true -rw- AmpoolMaxavailablesize 40 -rw- AmpoolMaxinactiveage 600000 -rw- AmpoolMaxpoolsize 4096 -rw- AmpoolMinavailablesize 2 -rw- AmpoolMonitorsleepinterval 600000 -rw- AmpoolResetnontransactionalstate true -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory -rw- AmpoolTimetolive 3600000 -rw- AmpoolWritecookietoclient false -r-- ConfigMBean true -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl -rw- Doconnectionpooling false -rw- Dofailover false -rw- Initpoolsize 0 -rw- Maxpoolcookieage -1 -rw- Maxpoolsize 4096 -rw- Poolmaxavailablesize 25 -rw- Poolmaxinactiveage 600000 -rw- Poolminavailablesize 5 -rw- Poolmonitorsleepinterval 600000 -rw- Poolrequesttimeout 30000 -rw- Pooltimetolive -1 -r-- ReadOnly false -rw- Recyclethreshold 10 -r-- RestartNeeded false -r-- SystemMBean false -r-- eventProvider true -r-- eventTypes java.lang.String[jmx.attribute.change] -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl Thanks to Brian Fry on the JDeveloper PM Team who did most of the work to put this sequence of steps together with me badgering him over his shoulder.

    Read the article

  • JavaScript Intellisense with Telerik in ASP.NET Master Page Project with VS 2010

    - by Otto Neff
    Today I was looking for a solution to get finally the JScript/Javascript/jQuery Intellisense Featureworking with my ASP.Net Webform Project to work. I found some good articles: - JScript IntelliSense Overview- JScript IntelliSense: A Reference for the “Reference” Tag- Enabling JavaScript intellisense in VS.NET 2010 to work with SharePoint 2010- Rich IntelliSense for jQueryBUT, all of suggested solutions did not work right with my Master Page based Visual Studio 2010 Solution.Only with physical Javascript Files (Telerik includes certain Javascript Files like jQuery as Ressource) or/andconfigure always a new ASP.NET Scriptmanager / RadScriptManager on every page derived from the Master Page, wasn't exactly what I was looking for. So I came up with the following simple Solution, to Trick VS2010and still have the Project running with multiple runat="server" Scriptmanagers. In short:- New ASP.NET control derived from ScriptManager with emtpy overwritten OnInit() to use it as emtpy wrapper for VS2010. In detail:New RadScriptManager Classusing System; using System.Collections.Generic; using System.Linq; using System.Web; using Telerik.Web.UI; namespace IntellisenseJavascript.Controls { public class IntelliJS : RadScriptManager { protected override void OnInit(EventArgs e) { } protected override void OnPreRender(EventArgs e) { } protected override void OnLoad(EventArgs e) { } protected override void Render(System.Web.UI.HtmlTextWriter writer) { } public override void RenderControl(System.Web.UI.HtmlTextWriter writer) { } } } web.config<configuration> ... <system.web> ... <pages> <controls> <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/> <add tagPrefix="VSFix" namespace="IntellisenseJavascript.Controls" assembly="IntellisenseJavascript"/> </controls> </pages> ... 
Master Page<%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.master.cs" Inherits="IntellisenseJavascript.Site" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head id="head" runat="server"> <title></title> <telerik:RadStyleSheetManager ID="radStyleSheetManager" runat="server" /> </head> <body> <form id="form" runat="server"> <telerik:RadScriptManager ID="radScriptManager" runat="server"> <Scripts> <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.Core.js" /> <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQuery.js" /> <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQueryInclude.js" /> </Scripts> </telerik:RadScriptManager> <telerik:RadAjaxManager ID="radAjaxManager" runat="server"> </telerik:RadAjaxManager> <div> #MASTER CONTENT# <asp:ContentPlaceHolder ID="contentPlaceHolder" runat="server"> </asp:ContentPlaceHolder> </div> </form> <script type="text/javascript"> $(function () { // Masterpage ready $('body').css('margin', '50px'); }); </script> </body> </html> ASPX Page<%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="IntellisenseJavascript.Default" %> <asp:Content ID="Content1" ContentPlaceHolderID="contentPlaceHolder" runat="server"> <VSFix:IntelliJS runat="server" ID="intelliJS"> <Scripts> <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.Core.js" /> <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQuery.js" /> <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQueryInclude.js" /> </Scripts> </VSFix:IntelliJS> <div style="border: 5px solid #FF9900;"> #PAGE CONTENT# </div> <script type="text/javascript"> $(function () { // Page ready $('body').css('border', '5px solid #888'); }); </script> </asp:Content> The Result I know, this is not the way it meant to be... but now at least you can have a Main ScriptManager for all Common Scripts and Settings, inject page specific Javascripts in PageLoad Event in normal ASPX Files and have JavaScript Intellisense for defined Scripts from JS Files or Assembly Ressouce in your Content Maybe, vNext will fix this.

    Read the article

  • How can I find touch typing lessons for words with the middle row only?

    - by user1838032
    I am learning touch typing and want to practice step by step. Is there any site where I can select which keys to include and then get a lesson for only those selected keys? I mean, I select the keys from the keyboard and the system prepares a lesson with random combinations of only those keys. Currently I want to practice the keys asdf gh jkl; but I am not able to find practice drills for that whole home row only, i.e. random combinations of it.
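
    If no site offers this, a tiny throwaway program can generate the drills. Below is a minimal Java sketch (the default key set and the word lengths are arbitrary choices) that prints random practice "words" built only from the keys you pass in.

        import java.util.Random;

        // Minimal drill generator: prints random "words" made only of the chosen keys.
        public class HomeRowDrill {
            public static void main(String[] args) {
                String keys = args.length > 0 ? args[0] : "asdfghjkl;";  // keys to practise
                Random random = new Random();
                StringBuilder line = new StringBuilder();
                for (int word = 0; word < 12; word++) {
                    int length = 3 + random.nextInt(4);                  // words of 3-6 characters
                    for (int i = 0; i < length; i++) {
                        line.append(keys.charAt(random.nextInt(keys.length())));
                    }
                    line.append(' ');
                }
                System.out.println(line.toString().trim());
            }
        }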

    Read the article

  • What is a practical way to debug Rails?

    - by Joshua Fox
    I get the impression that in practice, debuggers are rarely used for Rails applications. (Likewise for other Ruby apps, as well as Python.) We can compare this to the usual practice for Java or VisualStudio programmers--they use an interactive debugger in a graphical IDE. How do people debug Rails applications in practice? I am aware of the variety of debuggers, so no need to mention those, but do serious Rails programmers work without them? If so, why do you choose to do it this way? It seems to me that console printing has its limits when debugging complex logic.

    Read the article

  • Should I reuse variables?

    - by IAdapter
    Should I reuse variables? I know that many best practices say you should not, but later, when a different developer is debugging the code and sees three variables that look alike and differ only in where they are created, he might be confused. Unit testing is a good example of this. I do know that best practices are mostly against reuse; for example, they say not to override method parameters. Best practices are even against nulling out previous variables (in Java, Sonar warns when you assign null to a variable, since as of Java 6 you no longer need to do that to help the garbage collector, and you can't always control which warnings are turned off; most of the time the default is on).
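
    A small invented Java illustration of the trade-off being asked about — one reused variable versus separately named, narrowly scoped ones (Order, Coupon and the tax figure are placeholders):

        // Invented example types just to make the contrast compile.
        class Order  { double subtotal() { return 100.0; } }
        class Coupon { double discount() { return 5.0; } }

        class VariableReuseDemo {
            static final double TAX_RATE = 0.08;

            // Reusing one variable: fewer names, but at any breakpoint the debugger
            // only shows the latest meaning of "amount".
            static double reused(Order order, Coupon coupon) {
                double amount = order.subtotal();
                amount = amount * (1 + TAX_RATE);
                amount = amount - coupon.discount();
                return amount;
            }

            // One narrowly scoped variable per meaning: slightly more names, but each
            // intermediate value stays inspectable and self-describing.
            static double separate(Order order, Coupon coupon) {
                double subtotal = order.subtotal();
                double withTax  = subtotal * (1 + TAX_RATE);
                double total    = withTax - coupon.discount();
                return total;
            }
        }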

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP to a new platform, we often need to make change to OEMIoControl or HAL IOCTL handler for more specific. Since Microsoft introduced PQOAL in CE 5.0 and more and more BSP today leverages PQOAL to simplify the OAL, we no longer define the OEMIoControl directly. It is somehow analogous to migrate from pure Windows SDK to MFC; people starts to define those MFC handlers and forgot the WinMain and the big message loop. If you ever take a look at the interface between OAL and Kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, the pfnOEMIoctl is still there just as the entry point of Windows Program is WinMain since day one. (For those may argue about pfnOEMIoctl is not OEMIoControl, I will encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c which initialized pfnOEMIoctl to OEMIoControl. The interface is just to split OAL and Kernel which no longer linked to one executable file in CE 6, all of the function signature is still identical) So let's trace into PQOAL to realize how it implements OEMIoControl and how can we override an IOCTL handler we interest. First thing to know is the entry point (just as finding the WinMain in MFC), OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special but scan a pre-defined IOCTL table, g_oalIoCtlTable, and then execute the handler. (The highlight part) Other than that is just for error handling and the use of critical section to serialize the function. BOOL OEMIoControl(     DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,     DWORD *pOutSize ) {     BOOL rc = FALSE;     UINT32 i; ...     // Search the IOCTL table for the requested code.     for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {         if (g_oalIoCtlTable[i].code == code) break;     }     // Indicate unsupported code     if (g_oalIoCtlTable[i].pfnHandler == NULL) {         NKSetLastError(ERROR_NOT_SUPPORTED);         OALMSG(OAL_IOCTL, (             L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",             code, code >> 16, (code >> 2)&0x0FFF         ));         goto cleanUp;     }            // Take critical section if required (after postinit & no flag)     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Take critical section                    EnterCriticalSection(&g_ioctlState.cs);     }     // Execute the handler     rc = g_oalIoCtlTable[i].pfnHandler(         code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize     );     // Release critical section if it was taken above     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Release critical section                    LeaveCriticalSection(&g_ioctlState.cs);     } cleanUp:     OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc ));     return rc; }   Where is the g_oalIoCtlTable? It is defined in your BSP. Let's use DeviceEmulator BSP as an example. The PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = { #include "ioctl_tab.h" }; And that leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h which defined some of IOCTL handler but others are defined in oal_ioctl_tab.h which is under PLATFORM\COMMON\SRC\INC\. Finally, we got the full table body! (Just like tracing MFC, always jumping back and forth). 
The format of table is very straight forward, IOCTL code, Flags and Handler Function // IOCTL CODE,                          Flags   Handler Function //------------------------------------------------------------------------------ { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     }, { IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          }, { IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           }, The PQOAL scans through the table until it find a matched IOCTL code, then invokes the handler function. Since it scans the table from the top which means if we define TWO handler with same IOCTL code, the first one is always invoked with no exception. Now back to the PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     }, ... #include <oal_ioctl_tab.h> Note the IOCTL_HAL_INITREGISTRY handler are defined in both BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but due to BSP's local handler comes before "#include <oal_ioctl_tab.h>" so we know the OALIoCtlDeviceEmulatorHalInitRegistry always get called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (In some point of view, it is similar to message map in MFC) Please be aware, when you override an IOCTL handler in PQOAL, you may want to clone the original implementation to your BSP and change to meet your need. It is recommended and save you the redundant works but remember to rename the handler function (Just like the DeviceEmulator it changes the name of OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, linker may not be happy (due to name conflict) and the more important is by using different handler name, you could always redirect the handler back to original one. (It is like the concept of OOP that calling a function in base class; still not so clear? I am goinf to show you soon!) The OALIoCtlDeviceEmulatorHalInitRegistry setups DeviceEmulator specific registry settings and in the end, if everything goes well, it calls the OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest.     if(fOk) {         fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,             outSize, pOutSize);     } Now you got the picture, whenever you want to override an IOCTL hadnler that is implemented in PQOAL just Clone the handler function to your BSP as a template. Simple name change for the handler function, and a name change in the IOCTL table header file that maps the IOCTL with the function Implement your IOCTL handler and whenever you need to redirect it back just calling the original handler function. It is the standard way of implementing a custom IOCTL and most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.

    Read the article

  • how to ask questions about bad practices in stackoverflow ( or other technical forums) [migrated]

    - by Nahum Litvin
    I had a case where I needed to do something in code that I knew was a bad practice, but because of a unique situation, and after considering the risks thoroughly, I decided it was worth it. I cannot start explaining all my considerations, which include business secrets, over the internet, but I do need technical assistance. When I tried to ask at SA I got heated responses about why it is a bad practice instead of answers to how to do this. People are so concerned about what is the right way to write code that they forget there are other considerations as well. Can anyone provide insight into how to correctly ask such a question in order to avoid "this is a bad practice" answers and get real answers?

    Read the article

  • Multiple sites redirected to one main site

    - by mattgcon
    I have a client who insists on having multiple website domains all redirect to one main website domain. It is getting out of hand, and his server has become convoluted and riddled with garbage because of it, not to mention confusing at times. Each of the domains he is setting up has no content; it simply redirects the user to the main website domain. Is this practice of having multiple domains pointing to one main website common? And does anyone know where I can get information to give to this client to let him know this is a bad practice, if it is a bad practice?

    Read the article

  • Global Day of Coderetreat

    - by Tori Wieldt
    From the coderetreat.org website: Coderetreat is a day-long, intensive practice event, focusing on the fundamentals of software development and design. By providing developers the opportunity to take part in focused practice away from the pressures of 'getting things done', the coderetreat format has proven itself to be a highly effective means of skill improvement. This year, the Global Day of Coderetreat is happening on December 8. It sounds cool and fun, and of course, Java Champions and Java developers around the world are involved. Here's a small sampling: Chennai, India; São Paulo, Brazil; Skopje, Macedonia; Kraków, Poland. You can go to http://globalday.coderetreat.org/ to look up events near you. It's a great opportunity to practice your craft. Here's a video from an event last year to get a flavor:

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full kernel source, it is good enough for tracing, and even for tweaking, the kernel. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So, first things first: where is the kernel source? It's in %_WINCEROOT%\private\winceos\COREOS\nk\. And the next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have the full source, that is definitely the right answer, but neither of those applies to our case. So what should I do? Let's dig deeper into the coreos\nk folder; there are a couple of subfolders: CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps:
1. Back up the source code; I suggest backing up the whole %_WINCEROOT%\private\winceos\COREOS\nk\
2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
3. Make whatever modification you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\
4. build -c in %_WINCEROOT%\private\winceos\COREOS\nk\kernel
If everything went well so far, you should have a new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\. Basically, you have just rebuilt your kernel; the rest is to "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image, or just "set WINCEREL=1", then "sysgen -p common nk nkprof" and "makeimg", if you can't wait a few more minutes for "blddemo clean -q". That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), that creates SOURCES files for public drivers you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, and of course that includes the kernel. So I am going to introduce a second way to build your own kernel, using the SYSGEN_CAPTURE tool. Again, the steps:
1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
2. Run "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you can also run "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
3. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
5. Modify the SOURCES= macro to list the source files you added in step 4.
For example, if you copied vm.c, it is going to be SOURCES=vm.c.
6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and the proper include paths to your SOURCES file.
7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg", voila!
Here is an example of the macros you need to add for x86:

CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE

# Machine independent defines
CDEFINES=$(CDEFINES) -DDBGSUPPORT

_COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc

!IFDEF DP_SETTINGS
CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
!ENDIF

ASM_SAFESEH=1
CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE
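    Putting the pieces together, a SOURCES file in %_TARGETPLATROOT%\SRC\Kernel might end up looking roughly like the sketch below. Treat it as an illustration assembled from the fragments above, not a drop-in: the file SYSGEN_CAPTURE generates will contain more entries, and the exact macro set depends on your tree and target CPU.

# Sketch of a platform kernel SOURCES file (illustrative only)
SOURCES=vm.c

CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE

# Machine independent defines
CDEFINES=$(CDEFINES) -DDBGSUPPORT

_COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc

!IFDEF DP_SETTINGS
CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
!ENDIF

# x86 specific
ASM_SAFESEH=1
CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE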

    Read the article

  • Visual Studio: Add Item / Add as link rather than just Add

    - by Pete d'Oronzio
    I'm new to Visual Studio, coming from Delphi. I have a directory tree full of .cs files (root is \Common). I also have a directory tree full of Applications (root is \Applications). Finally, I've got a tree full of Assemblies (root is \Assemblies). I'd like to keep my .cs files in the Common tree and all the environment voodoo (solutions, projects, settings, metadata, debug data, bin, etc.) in the Assemblies tree. So, for a simple example, I've got an assembly called PdMagic.Common.Math.dll. The solution and project are located in \Assemblies\Common\Math. All of its source (.cs) files are in \Common\Math (matrix.cs, trig.cs, mathtypes.cs, mathfuncs.cs, stats.cs, etc.). When I use Add Existing Item to add matrix.cs to my project, a copy of it is added to the \Assemblies\Common\Math folder. I just want to reference it; I don't want multiple copies lying around. I've tried Add Existing Item and used the drop down to "Add link" rather than just "Add", and that seems to do what I want. Question: What is the "best practice" for this sort of thing? Do most people just put those .cs files all in the same folder as the project? Why isn't "Add link" the default? Thanks!
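    For reference, "Add as link" simply stores a relative path plus a Link element in the project file instead of copying the file. A linked source file in a .csproj looks something like the fragment below; the relative path is illustrative for the \Assemblies\Common\Math layout described above:

<ItemGroup>
  <!-- matrix.cs stays in \Common\Math; the project only references it -->
  <Compile Include="..\..\..\Common\Math\matrix.cs">
    <Link>matrix.cs</Link>
  </Compile>
</ItemGroup>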

    Read the article

  • Compiling Java code in terminal having a Jar in CLASSPATH

    - by Masi
    How can you compile the code using javac in a terminal with google-collections on the CLASSPATH? Example of the code I am trying to compile using javac in a terminal (it works in Eclipse):

import com.google.common.collect.BiMap;
import com.google.common.collect.HashBiMap;
public class Locate {
    ...
    BiMap<MyFile, Integer> rankingToResult = HashBiMap.create();
    ...
}

Compiling in terminal

src 288 % javac Locate.java
Locate.java:14: package com.google.common.collect does not exist
import com.google.common.collect.BiMap;
^
Locate.java:15: package com.google.common.collect does not exist
import com.google.common.collect.HashBiMap;
^
Locate.java:153: cannot find symbol
symbol : class BiMap
location: class Locate
BiMap<MyFile, Integer> rankingToResult = HashBiMap.create();
^
Locate.java:153: cannot find symbol
symbol : variable HashBiMap
location: class Locate
BiMap<MyFile, Integer> rankingToResult = HashBiMap.create();
^
4 errors

My CLASSPATH

src 289 % echo $CLASSPATH
/u/1/bin/javaLibraries/google-collect-1.0.jar
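    A minimal sketch of the commands, assuming the jar path from the CLASSPATH above and a Unix-style shell (on Windows use ';' as the path separator). Note that -cp overrides the CLASSPATH environment variable, so whatever the shell has set no longer matters:

# compile with the jar and the current directory on the classpath
javac -cp .:/u/1/bin/javaLibraries/google-collect-1.0.jar Locate.java

# run with the same classpath
java -cp .:/u/1/bin/javaLibraries/google-collect-1.0.jar Locate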

    Read the article

  • Using MSADO15.DLL and C++ with MinGW/GCC on Windows Vista

    - by Eugen Mihailescu
    INTRODUCTION Hi, I am very new to C++; that is my first statement. I started out with VC++ 2008 Express, and I've noticed that GCC is becoming something of a standard, so I am trying to take the right steps even from the beginning. I have written a piece of code that connects to MSSQL Server via ADO; in VC++ it works like a charm by importing MSADO15.dll: #import "msado15.dll" no_namespace rename("EOF", "EndOfFile") Because I am going to move away from VC++ I was looking for an alternative (eventually multi-platform) IDE, so I am sticking (for now) with Code::Blocks (I'm using the latest nightly build, SVN 6181). As the compiler I chose GCC 3.4.5 (ported via MinGW 5.1.6), under Vista. I was trying to compile a simple "hello world" application with GCC that uses/imports the same msado15.dll (#import "c:\Program Files\Common Files\System\ADO\msado15.dll" no_namespace rename("EOF", "EndOfFile")) and I was surprised to see a lot of compile-time errors. I expected that the #import compiler directive would generate a library from msado15.dll so it could be linked against later (at link-edit time or whatever). Instead it tried to read it as a normal file (like a header file, if you like), interpreting each line of the DLL (which has an MZ signature):

Example:
Compiling: main.cpp
E:\MyPath\main.cpp:2:64: warning: extra tokens at end of #import directive
In file included from E:\MyPath\main.cpp:2:
c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\144' in program
In file included from E:\MyPath\main.cpp:2:
c:\Program Files\Common Files\System\ADO\msado15.dll:1:4: warning: null character(s) ignored
c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\3' in program
c:\Program Files\Common Files\System\ADO\msado15.dll:1:6: warning: null character(s) ignored
c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\4' in program
... and so on.

MY QUESTION Well, it is obvious that under this version of GCC the #import directive does not do the expected job (perhaps #import is not supported by GCC at all), so finally my question: how can I use ADO to access an MSSQL database from a C++ program compiled with GCC (v3.4.5)?
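    For context: #import is an MSVC-specific extension that reads a type library and generates wrapper headers (.tlh/.tli), so GCC will never process it and instead chokes on the binary DLL. One hedged alternative under MinGW is to skip #import and drive ADO through raw COM/IDispatch. The sketch below only shows creating an ADODB.Connection object (link with -lole32 -loleaut32 -luuid); it is not a full data-access layer, and invoking Open/Execute would go through IDispatch::GetIDsOfNames and Invoke:

// ado_sketch.cpp : minimal COM bootstrap for ADO without #import
// build (MinGW): g++ ado_sketch.cpp -o ado_sketch -lole32 -loleaut32 -luuid
#include <windows.h>
#include <ole2.h>
#include <iostream>

int main() {
    if (FAILED(CoInitialize(NULL)))            // start COM on this thread
        return 1;

    CLSID clsid;
    HRESULT hr = CLSIDFromProgID(L"ADODB.Connection", &clsid);
    if (SUCCEEDED(hr)) {
        IDispatch *pConn = NULL;               // late-bound ADO Connection
        hr = CoCreateInstance(clsid, NULL, CLSCTX_INPROC_SERVER,
                              IID_IDispatch, (void**)&pConn);
        if (SUCCEEDED(hr)) {
            std::cout << "ADODB.Connection created" << std::endl;
            // Call Open/Execute via IDispatch::GetIDsOfNames + Invoke here.
            pConn->Release();
        }
    }
    CoUninitialize();
    return 0;
}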

    Read the article

  • How to Import XML generated by TFPT into Excel 2007?

    - by keerthivasanp
    Below is the XML content generated by TFPT for the WIQL I issued. When I try to import this XML into Excel 2007, the XML Source pane shows only "Id", "RefName" and "Value" as fields to be mapped, whereas I would like to display System.Id, System.State, Microsoft.VSTS.Common.ResolvedDate and Microsoft.VSTS.Common.StateChangeDate as column headings and the corresponding values as rows. Any idea how to achieve this using XML import in Excel 2007?

<?xml version="1.0" encoding="utf-8"?>
<WorkItems>
  <WorkItem Id="40100">
    <Field RefName="System.Id" Value="40100" />
    <Field RefName="System.State" Value="Closed" />
    <Field RefName="Microsoft.VSTS.Common.ResolvedDate" Value="3/17/2010 9:39:04 PM" />
    <Field RefName="Microsoft.VSTS.Common.StateChangeDate" Value="4/20/2010 9:15:32 PM" />
  </WorkItem>
  <WorkItem Id="44077">
    <Field RefName="System.Id" Value="44077" />
    <Field RefName="System.State" Value="Closed" />
    <Field RefName="Microsoft.VSTS.Common.ResolvedDate" Value="3/1/2010 4:26:47 PM" />
    <Field RefName="Microsoft.VSTS.Common.StateChangeDate" Value="4/20/2010 7:32:12 PM" />
  </WorkItem>
</WorkItems>

Thanks
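    One possible workaround (a sketch, not the only approach) is to pre-flatten the XML with XSLT so that each RefName becomes its own element; Excel's XML Source pane will then offer System_Id, System_State and so on as separate columns, one Row per work item. Element names cannot contain '.', so it is replaced with '_' here:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- Turn each WorkItem into a flat Row whose child elements are the fields -->
  <xsl:template match="/WorkItems">
    <Rows>
      <xsl:for-each select="WorkItem">
        <Row>
          <xsl:for-each select="Field">
            <xsl:element name="{translate(@RefName, '.', '_')}">
              <xsl:value-of select="@Value"/>
            </xsl:element>
          </xsl:for-each>
        </Row>
      </xsl:for-each>
    </Rows>
  </xsl:template>
</xsl:stylesheet>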

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >