Search Results

Search found 10194 results on 408 pages for 'raw types'.


  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that "Generally, you should find that Samba performs similarly to ftp at raw transfer speed." In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details:
    - Both computers are running Ubuntu 10.10 (I'm using Samba because I have a Mac as well).
    - The Samba share is on a local home network, mounted as: $ mount ... //server.local/share/ on /mnt/share type cifs (rw,mand)
    - Samba performance was tested by copying (cp) a single file of ~4GB to and from the share, using time for timing and calculating the transfer speed by hand.
    - The ftp numbers are what the ftp client reports for get/put of the same file.
    - iperf gives a network speed of ~900 Mbit/s.
    - bonnie++ gives disk speeds of ~200 MB/s on both sides, for block reads as well as block writes.
    - I tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options); most of them made little to no difference. (The one that made a difference caused write speed to drop by 50%.)
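
    For reference, the smb.conf parameters the performance tuning HOWTO is talking about all live in the [global] section; a sketch of a typical tuning starting point (values are illustrative, not a known fix for this particular setup):

        [global]
            # let the kernel coalesce ACKs and size the socket buffers explicitly
            socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
            # large raw reads/writes (already the default on current Samba)
            read raw = yes
            write raw = yes
            # hand large transfers to sendfile/recvfile paths where supported
            use sendfile = yes
            min receivefile size = 16384

    Run testparm after each edit to catch syntax errors, restart smbd, and re-run the same 4GB cp test so the numbers stay comparable.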

    Read the article

  • Getting MTP to work with a Galaxy Tab 2 7.0?

    - by Wouter
    I'm trying to get MTP working with the Galaxy Tab 2 7.0 on my Ubuntu installation, so that I can access its files. I tried to follow what is described here: http://www.omgubuntu.co.uk/2011/12/how-to-connect-your-android-ice-cream-sandwich-phone-to-ubuntu-for-file-access However, I fail at executing the following commands: mtp-detect | grep idVendor and mtp-detect | grep idProduct. Both fail the same way:

        [20:42|0] $ mtp-detect | grep idVender
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0

    (mtp-detect | grep idProduct fails with identical output.) My guess was that idVendor is the same as the VID (04e8) and idProduct is the same as the PID (6860). I continued with those values and completed the tutorial. When finished, I tried android-connect, which returned: fuse: bad mount point `/media/GalaxyTab': Transport endpoint is not connected Does anybody have a clue what to do? I also want to note that when I connect my Galaxy Tab 2 7.0, I still get an Ubuntu pop-up saying a device was connected. I can also still see the folder structure; the problem is that all the folders show 0 bytes and do not have any subfolders. I can only see the folders in the root. PS: I also checked a similar question and tried what is described in this answer: http://askubuntu.com/a/88630/27480
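
    The PTP_ERROR_IO in the transcript above very often means some other process already has the device open; on Ubuntu the usual suspect is the gvfs volume monitor that also produces that pop-up. A hedged sequence to try before running android-connect again (the mount point matches the tutorial; the process name is an assumption about this setup):

        # detach the stale fuse mount left behind by the failed attempt
        fusermount -u /media/GalaxyTab

        # stop the gvfs monitor that grabs the tablet over PTP/MTP
        killall gvfs-gphoto2-volume-monitor

        # retry detection; the VID/PID pair on the "Device 0" line is what the
        # tutorial's idVendor/idProduct greps are after (04e8/6860 here)
        mtp-detect

    Note the transcript greps for "idVender", so even a successful run would have printed nothing for that first command; the values can simply be read off the Device 0 line.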

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices as it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way: you start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you give it to ASM (or even just create a tablespace without ASM on that device, with datafile '/dev/sdf'), and next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):
    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage, in terms of open file descriptors, was high

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux? A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library, using a specific Oracle-defined interface, that gets used by the ASM instance and by the database instance when available. This interface offered 2 components:
    - an IO interface - allow any IO to the devices to go through ASMLib
    - device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager).
    ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library if installed and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution; Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access, you have libasm for ASM/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components:
    - Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko - a kernel module that implements the ASM device under /dev/oracleasm/*

    The userspace library is a binary-only module since it links with and contains Oracle header files, but it is generic; we only have one ASM library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the ASM device (/dev/oracleasm). It can be installed on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say: create an ASM disk labeled foo on what is currently /dev/sdf... So if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything else; it will be taken care of. So it's a convenience thing. The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/* and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels.

    Advantages of using ASMLib:
    - A good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
    - A single file descriptor per Oracle process, not one per device or datafile per process, reducing the number of open filehandles and the associated overhead
    - Device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL.
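
    For readers who have not seen the support tools mentioned above, the labeling workflow looks roughly like this (device names are examples):

        # one-time setup: default user/group, load on boot
        /etc/init.d/oracleasm configure

        # write the label FOO on the partition; ASM discovers it as ORCL:FOO
        /etc/init.d/oracleasm createdisk FOO /dev/sdf1

        # after a reboot, rescan and list; the label follows the disk even if
        # the kernel now calls the device /dev/sdg1
        /etc/init.d/oracleasm scandisks
        /etc/init.d/oracleasm listdisks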
    The driver didn't make sense to get pushed into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a ~60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL, and of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix and also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel. Note to those that wonder: yes, it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block, send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has checksumming and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it before we can make use of it. Thanks to UEK, and us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the ASM kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and provide it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of ASMLib maintained by us. SuSE SLES still builds and ships the oracleasm module and they do all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.
    And finally, to re-iterate a few important things:
    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux distribution (SLES, RHEL, OL) without ASMLib.
    - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel. Only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged. So we basically end up with 2 datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect him of trying to make his life easier when it comes to backing up/upgrading MySQL. There are not so many links between the data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale than daily text files do. Well, what would you advise? Thanks.
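
    If the RDBMS route wins, the two datasets described above map onto a small schema; a minimal sketch (table and column names invented for illustration):

        -- raw samples as recorded at 1 Hz, plus quality-check channels
        CREATE TABLE raw_sample (
            instrument_id INT NOT NULL,
            sampled_at    DATETIME NOT NULL,
            voltage       DOUBLE,
            pressure      DOUBLE,
            temperature   DOUBLE,
            flow_rate     DOUBLE,
            PRIMARY KEY (instrument_id, sampled_at)
        );

        -- calibrated values; only quality_flag is expected to change later
        CREATE TABLE processed_sample (
            instrument_id INT NOT NULL,
            sampled_at    DATETIME NOT NULL,
            value         DOUBLE,
            quality_flag  TINYINT NOT NULL DEFAULT 0,
            PRIMARY KEY (instrument_id, sampled_at)
        );

    At ~500 KB/day/instrument the volumes are modest, and updating a flag becomes a one-line UPDATE instead of rewriting a day file.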

    Read the article

  • Where to implement storable items

    - by James Hay
    I'm creating a multiplayer online trading game. The things that are traded range from raw items to complex products. For example, Steel is a raw item. A Mechanical Assembly is a more complex item that requires 2x Steel and maybe 1x Rubber. Then Hydraulics is an item that contains 2x Mechanical Assemblies and 1x Electronics (which is another complex item). And so forth. These items will be created by me; players can't create their own items, so it doesn't need to handle arbitrary layers of complexity for items. If my example isn't clear, think Minecraft. You have wooden planks, which can be made into sticks. From there the sticks - combined with metals - can be made into tools. My game has nothing to do with Minecraft or any sandbox building game, but it uses a similar progressive complexity in creating items that I want to have in my game. My question is basically: how do you store something like this, assuming that I will want to add more items in the future? Do you store it in a database or in a separate library that the game uses? EDIT None of the items actually "do" anything; they are simply there to either sell, purchase, or combine with other items to make a more complex item, which can then be sold, purchased or combined... you get the idea. The items themselves would not have any properties, but the instances of the items would. For example, an item that one player has would have a certain "quality" and, if they were selling it, a certain "price". An instance of that same item that a different player had would need to have a different "quality" and "price" if they were selling it. I think the price part will not be required on an individual item, because instead I would have a "sale" object which was for a price and contained certain items.
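
    Whichever container you pick, the recipes themselves are just a many-to-many relation from an item to its components with a quantity; a sketch of one way to store it in a database (names invented for illustration):

        -- item definitions, authored by the designer
        CREATE TABLE item (
            item_id INT PRIMARY KEY,
            name    VARCHAR(64) NOT NULL
        );

        -- one row per component of a complex item,
        -- e.g. Hydraulics -> 2x Mechanical Assembly, 1x Electronics
        CREATE TABLE item_component (
            parent_item_id    INT NOT NULL REFERENCES item (item_id),
            component_item_id INT NOT NULL REFERENCES item (item_id),
            quantity          INT NOT NULL,
            PRIMARY KEY (parent_item_id, component_item_id)
        );

    Raw items are simply rows with no item_component entries, and per-player state like "quality" would live on a separate owned-instance table rather than on the definitions.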

    Read the article

  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop -- not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project? Currently, I created a new MonoGame project, added a folder called "Content," and added images under there; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy options. It seems strange, because my "raw" XNA project just last week had a Content project in it (XNA Framework Content Pipeline, according to VS2010), which compiled my images to XNB (I think). It seems like MonoGame doesn't use the same content pipeline, but I'm not sure. Edit: My question is not "how do I get the XNA content pipeline to work with MonoGame?" My question is "why would I want to use the XNA content pipeline in MonoGame?" Because there are (at least) two solutions that I see today:
    - Add the images to the MonoGame project and set the Copy to Output Directory option to copy.
    - Add an XNA content pipeline project, add my images to that instead, and reference it from my MonoGame project.
    Which solution should I use, and why? I currently have a working version with the first option.
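
    Both routes are expressible in code; a sketch of what each looks like at load time (paths and asset names are examples, and option 2 assumes the images were compiled to XNB by a content project):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Game1 : Game
        {
            GraphicsDeviceManager graphics;
            Texture2D texture;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                // option 1: raw image copied to the output directory, decoded at run time
                using (var stream = System.IO.File.OpenRead("Content/player.png"))
                    texture = Texture2D.FromStream(GraphicsDevice, stream);

                // option 2: compiled XNB loaded through the content manager
                // texture = Content.Load<Texture2D>("player");
            }
        }

    One practical difference worth knowing: the pipeline pre-processes assets (format conversion, premultiplied alpha), while Texture2D.FromStream hands you the file as-is, so transparent sprites can render differently between the two.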

    Read the article

  • Oracle ASM case study (Japanese post; the title and most of the text were lost to character mis-encoding)

    - by Kumiko Fujita
    A case study of building a mission-critical (24h/365d operation) system on Oracle ASM (Automatic Storage Management). The recoverable details: OS: UNIX; Database: Oracle Database 11g Enterprise Edition 11.2.0.1, 2-node Real Application Clusters with the Partitioning Option; about 2.5TB of SAN storage given to ASM, on RAID 0+1, laid out as follows:

        Disk group | Redundancy | AU size | Size   | Contents
        DG01       | NORMAL     | 4MB     | 5GB    | OCR / voting files
        DG02       | (garbled)  | 4MB     | 60GB   | (garbled), REDO logs, spfile
        DG03       | (garbled)  | 4MB     | 60GB   | (garbled), REDO logs
        DG04       | (garbled)  | 4MB     | 178GB  | (garbled)
        DG05       | (garbled)  | 4MB     | 2670GB | (garbled)

    The article references this ASM best-practices paper: http://otndnld.oracle.co.jp/products/database/oracle10g/availability/pdf/asm_best_practices0907-fujitsu_jp.pdf and closes with remarks (largely unrecoverable) on RAW devices, DBA operations, and, from Oracle Database 11gR2, ASM and Clusterware being combined into the GRID Infrastructure with the OCR (Oracle Cluster Registry) stored in ASM.

    Read the article

  • Excel get_Range missing when interop assembly is embedded in .NET 4.0

    - by mikemay
    I build an assembly referencing a COM interop DLL. If I embed the COM interop types, by setting Embed Interop Types to True in the Reference's properties (VS2010), at run time an error occurs: "object does not contain a definition for get_Range". If COM interop types are not embedded, then no error occurs. Does anyone know why a particular method, Worksheet.get_Range, should be omitted, how to work around this, or have any other relevant insights? I should be grateful for any help. The interop DLL contains a reference to Worksheet.get_Range(object, [object]). Using Reflector on my calling assembly, there is no mention of get_Range under Worksheet. The interop assembly I am embedding is generated from Excel9.olb. I am not using PIAs as the application targets multiple Excel versions.
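
    One workaround reported for this kind of failure: with embedded interop types, go through the C# 4 indexed Range property (or late binding) rather than the get_Range accessor, since the compiler embeds only the members it sees being used and the accessor may be absent from the embedded local types. A sketch, assuming ws is a Worksheet and the usual interop namespace (not verified against an Excel9.olb-generated assembly):

        using Excel = Microsoft.Office.Interop.Excel;

        static class RangeHelper
        {
            static Excel.Range FirstBlock(Excel.Worksheet ws)
            {
                // indexed property syntax, resolvable with embedded interop types
                return ws.Range["A1", "B2"];

                // explicit accessor, the call that fails when the type is embedded:
                // return ws.get_Range("A1", "B2");
            }
        }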

    Read the article

  • WSDL Object model

    - by Swaroop
    I'm using the WSDL object model (WOM) along with XSOM for a project of mine. The WOM gives me a way to drill down and look at messages and the message types, which are element declarations. However, I am unable to find a way to parse the simple and complex types. The APIs are tricky. There seems to be some kind of connection between WOM and XSOM. I'd really appreciate it if you could tell me how I can parse the simple and the complex types in my .wsdl file.
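
    For what it's worth, the XSOM half on its own can enumerate schema types once it is fed the schemas from the wsdl:types section; a hedged sketch (assuming the schema has been extracted to a file, and using the XSOM API as I know it):

        import java.io.File;
        import java.util.Iterator;

        import com.sun.xml.xsom.XSComplexType;
        import com.sun.xml.xsom.XSSchemaSet;
        import com.sun.xml.xsom.XSSimpleType;
        import com.sun.xml.xsom.parser.XSOMParser;

        public class SchemaDump {
            public static void main(String[] args) throws Exception {
                XSOMParser parser = new XSOMParser();
                parser.parse(new File("service-types.xsd")); // placeholder path
                XSSchemaSet sset = parser.getResult();

                for (Iterator<XSComplexType> it = sset.iterateComplexTypes(); it.hasNext();) {
                    System.out.println("complex: " + it.next().getName());
                }
                for (Iterator<XSSimpleType> it = sset.iterateSimpleTypes(); it.hasNext();) {
                    System.out.println("simple:  " + it.next().getName());
                }
            }
        }

    The bridge from the WOM side is the element declaration's QName: look the element up in the XSSchemaSet and walk its XSType from there.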

    Read the article

  • Higher-order type constructors and functors in OCaml

    - by sdcvvc
    Can the following polymorphic functions

        let id x = x;;
        let compose f g x = f (g x);;
        let rec fix f = f (fix f);; (* laziness aside *)

    be written for types/type constructors or modules/functors? I tried

        type 'x id = Id of 'x;;
        type 'f 'g 'x compose = Compose of ('f ('g 'x));;
        type 'f fix = Fix of ('f (Fix 'f));;

    for types, but it doesn't work. Here's a Haskell version for types:

        data Id x = Id x
        data Compose f g x = Compose (f (g x))
        data Fix f = Fix (f (Fix f))

        -- examples:
        l = Compose [Just 'a'] :: Compose [] Maybe Char

        type Natural = Fix Maybe -- natural numbers are the fixpoint of Maybe
        n = Fix (Just (Fix (Just (Fix Nothing)))) :: Natural -- n is 2

        -- up to isomorphism, composition of identity and f is f:
        iso :: Compose Id f x -> f x
        iso (Compose (Id a)) = a
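
    For the OCaml side: type constructors cannot abstract over other type constructors (there are no higher-kinded type variables), which is why the three type declarations above are rejected, but the module system can say the same thing with functors. A sketch, believed to compile on any recent OCaml:

        (* a unary type constructor, packaged as a module *)
        module type T1 = sig type 'a t end

        module Id = struct type 'a t = Id of 'a end

        (* composition of two type constructors *)
        module Compose (F : T1) (G : T1) = struct
          type 'a t = Compose of 'a G.t F.t
        end

        (* fixpoint of a type constructor *)
        module Fix (F : T1) = struct
          type t = Fix of t F.t
        end

        (* naturals as the fixpoint of option, mirroring the Haskell example *)
        module Natural = Fix (struct type 'a t = 'a option end)
        let two = Natural.Fix (Some (Natural.Fix (Some (Natural.Fix None))))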

    Read the article

  • How to perform insert and update with ADO.NET Data Services (EF and inheritance)

    - by Thurein
    Hi, I have an entity model in which I have table-per-type inheritance. There are 3 types: first, Contact, which I defined as abstract in my EF model, and then the Company and Person types, which are derived from the Contact type. Is it possible to perform an insert using an ADO.NET Data Service and the ASP.NET AJAX library? I was trying the following client code: dataContext.insertEntity(person, "Contacts"); I was getting this response from the server: "Error processing request stream. Type information must be specified for types that take part in inheritance." Thanks.
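
    The fix usually reported for this error is to name the concrete entity type in the JSON payload's __metadata, so the service knows which branch of the inheritance tree to materialize; a sketch ("MyModel.Person" stands in for your namespace-qualified type name):

        // type information is mandatory for types taking part in inheritance
        var person = {
            __metadata: { type: "MyModel.Person" },
            Name: "John Smith"
        };
        dataContext.insertEntity(person, "Contacts");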

    Read the article

  • wsimport and header params for logging

    - by Milan
    I have this situation: I'm generating a form based on the WSDL. I made it work, but I ran into a problem: the wsimport tool generates classes whose methods take params for the header (for authentication), and those params are not just simple types but complex ones. The problem is that I don't know in advance which classes will be generated, so I need simple types for the methods.

        @WebMethod(operationName = "DNSLookup", action = "http://www.strikeiron.com/DNSLookup")
        @WebResult(name = "DNSLookupResult", targetNamespace = "http://www.strikeiron.com")
        @RequestWrapper(localName = "DNSLookup", targetNamespace = "http://www.strikeiron.com", className = "invoker.DNSLookup")
        @ResponseWrapper(localName = "DNSLookupResponse", targetNamespace = "http://www.strikeiron.com", className = "invoker.DNSLookupResponse")
        public SIWsOutputOfDNSInfo dnsLookup(
            @WebParam(name = "HostNameOrIPAddress", targetNamespace = "http://www.strikeiron.com")
            String hostNameOrIPAddress,
            @WebParam(name = "LicenseInfo", targetNamespace = "http://ws.strikeiron.com", header = true, partName = "LicenseInfo")
            LicenseInfo licenseInfo,
            @WebParam(name = "SubscriptionInfo", targetNamespace = "http://ws.strikeiron.com", header = true, mode = WebParam.Mode.OUT, partName = "SubscriptionInfo")
            Holder<SubscriptionInfo> subscriptionInfo);

    You can see the LicenseInfo licenseInfo and Holder<SubscriptionInfo> subscriptionInfo parameters. Is it possible to somehow specify that simple types should be used for the header params?

    Read the article

  • Generating new sources via Maven plugin after compile phase

    - by japher
    I have a Maven project within which I need to execute two code generation steps. One generates some Java types; the second then depends on those Java types to generate some more code. Is there a way to have both of these steps happen during my build? At the moment my steps are:
    - execute the first code generation plugin (during generate-sources)
    - add the directory of generated types to the build path
    - execute the second code generation plugin (during compile)
    However, my problem is that anything generated by the second code generation plugin will not be compiled (because the compile phase has finished). If I attach the second code generation plugin to an earlier phase, it fails because it needs the classes from the first code generation plugin to be present on the classpath. I know I could split this into two modules with one dependent on the other, but I was wondering if this could be achieved in one pom. It seems like I need a way to invoke compile again after the normal compile phase is complete. Any ideas?
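
    One single-pom arrangement that can work: keep the first generator at generate-sources, move the second generator to process-classes (where the first batch of classes already exists on the classpath), and bind an extra execution of the compiler plugin after it in the same phase so the second batch of sources also gets compiled. A sketch of just the extra compile pass (the generator plugins themselves are placeholders):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <executions>
            <!-- second compile pass; runs after the second generator because
                 plugins bound to the same phase execute in pom order -->
            <execution>
              <id>compile-second-generation</id>
              <phase>process-classes</phase>
              <goals>
                <goal>compile</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    The second generator's output directory still needs to be registered as a source root (build-helper-maven-plugin's add-source goal is the usual way) so this pass picks it up.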

    Read the article

  • When should you NOT use the asterisk (*) when declaring a variable in Objective-C

    - by Jason
    I have just started learning Objective-C and the asterisk is giving me some trouble. As I look through sample code, sometimes it is used when declaring a variable and sometimes it is not. What are the "rules" for when it should be used? I thought it had something to do with the data type of the variable (asterisk needed for object data types, not needed for simple data types like int). However, I have seen "object" data types such as CGPoint declared without the asterisk as well. Is there a definitive answer, or does it have to do with how and what you use the variable for?
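
    The short rule: the asterisk is not "object vs. simple type", it marks a pointer. Objective-C objects are always handled through pointers, while C scalars and C structs (CGPoint is a struct, not an object) are plain values. A few declarations to illustrate:

        NSString *name = @"hello";          // Objective-C object: always a pointer
        NSArray *items = [NSArray array];   // same for any NSObject subclass

        int count = 42;                     // C scalar: a value, no asterisk
        CGPoint origin = CGPointMake(0, 0); // C struct: also a value, no asterisk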

    Read the article

  • php selecting hash using wildcards

    - by tipu
    Say I have a hashmap: $hash = array('fox' => 'some value', 'fort' => 'some value 2', 'fork' => 'some value again'); I am trying to accomplish an autocomplete feature. When the user types 'fo', I would like to retrieve, via ajax, all 3 keys from $hash. When the user types 'for', I would like to retrieve only the keys fort and fork. Is this possible? What I was thinking was using binary search to isolate the keys starting with 'f', instead of brute-force searching, then continuing to eliminate the indexes as the user types out their query. Is there a more efficient solution to this?
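
    For a hash of this size, a linear prefix scan on the server is usually fast enough and far simpler than a hand-rolled binary search; a sketch:

        <?php
        // return every entry of $hash whose key starts with $prefix
        function prefix_matches(array $hash, $prefix)
        {
            $out = array();
            foreach ($hash as $key => $value) {
                if (strpos($key, $prefix) === 0) {
                    $out[$key] = $value;
                }
            }
            return $out;
        }

        $hash = array('fox' => 'some value', 'fort' => 'some value 2', 'fork' => 'some value again');
        print_r(prefix_matches($hash, 'fo'));  // all three keys
        print_r(prefix_matches($hash, 'for')); // fort and fork only

    If the array grows into the tens of thousands of keys, ksort() it once and binary-search the sorted key list for the prefix boundaries.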

    Read the article

  • Java variadic function parameters

    - by Amir Rachum
    Hi, I have a function that accepts a variable number of parameters: foo(Class... types); in which I get a certain number of class types. Next, I want to have a function bar(??) that accepts a variable number of parameters as well, and that is able to verify that the variables are the same in number (that's easy) and of the same types (the hard part) as were specified in foo. How can I do that? Edit: to clarify, a call could be: foo(String.class, Integer.class); bar("aaa", 32); // OK! bar(3); // ERROR! bar("aa", "bb"); // ERROR! Also, foo and bar are methods of the same class.
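
    One sketch of the check: foo stores the Class objects, and bar takes Object... and tests each argument with Class.isInstance. Note that autoboxing turns the literal 32 into an Integer, which is why the stored type has to be Integer.class (there is no Int.class):

        public class Checker {
            private Class<?>[] types;

            public void foo(Class<?>... types) {
                this.types = types;
            }

            public void bar(Object... args) {
                if (args.length != types.length)
                    throw new IllegalArgumentException("expected " + types.length + " arguments");
                for (int i = 0; i < args.length; i++) {
                    if (!types[i].isInstance(args[i]))
                        throw new IllegalArgumentException(
                            "argument " + i + " is not a " + types[i].getName());
                }
            }

            public static void main(String[] args) {
                Checker c = new Checker();
                c.foo(String.class, Integer.class);
                c.bar("aaa", 32);  // OK
                c.bar("aa", "bb"); // throws: argument 1 is not a java.lang.Integer
            }
        }

    The check happens against runtime classes, so it cannot catch anything at compile time; primitives arrive boxed, and a null argument will fail every isInstance test.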

    Read the article

  • Uploadify refuses to upload WMV, FLV and MP4 files

    - by Jon Winstanley
    The Uploadify plugin for jQuery seems very good and works for most file types. However, it allows me to upload all file types apart from the ones I need! Namely .WMV, .FLV and .MP4. I have googled the issue and not found anyone having such difficulties. I have already tried changing the fileExt parameter and also tried removing it altogether. I have tested in Google Chrome, IE7 and Firefox, and none work for these file types. Is there a known reason for this behaviour?

    Read the article

  • Use of serialization in JMX calls on WebSphere Application Server to avoid ClassCastException

    - by hstoerr
    We are using JMX for communication between different EARs on the same WebSphere application server (6.1). All works well if we only use Java types as arguments, but if we use our own types, the problem is that we get ClassCastExceptions on the receiver side. This is obviously a classloader problem: if the jar with the argument types is put into the JRE endorsed directory, such that all classloaders use exactly the same class, the exceptions disappear. But we would much prefer to put the library that defines the argument types in the EAR itself. Now my question: is there a trick to persuade WAS to serialize and deserialize the arguments during the JMX call? I guess in this case the ClassCastExceptions would disappear.

    Read the article

  • Optimizing if-else /switch-case with string options

    - by cc
    What modification would you bring to this piece of code? In the last lines, should I use more if-else structures instead of "if-if-if"?

        if (action.equals("opt1")) {
            // something
        } else {
            if (action.equals("opt2")) {
                // something
            } else {
                if ((action.equals("opt3")) || (action.equals("opt4"))) {
                    // something
                }
                if (action.equals("opt5")) {
                    // something
                }
                if (action.equals("opt6")) {
                    // something
                }
            }
        }

    Later Edit: This is Java. I don't think that the switch-case structure will work with Strings. Later Edit 2: A switch works with the byte, short, char, and int primitive data types. It also works with enumerated types (discussed in Classes and Inheritance) and a few special classes that "wrap" certain primitive types: Character, Byte, Short, and Integer (discussed in Simple Data Objects).
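
    On Java 6 and earlier (where, as the quoted docs say, switch cannot take a String), the conventional shape is a single flat else-if chain, which reads the same way a switch would:

        if (action.equals("opt1")) {
            // something
        } else if (action.equals("opt2")) {
            // something
        } else if (action.equals("opt3") || action.equals("opt4")) {
            // something
        } else if (action.equals("opt5")) {
            // something
        } else if (action.equals("opt6")) {
            // something
        }

    From Java 7 onward, switch does accept Strings directly; and if the option list keeps growing, a Map from the option string to a handler object avoids the chain entirely.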

    Read the article

  • Parse Complex WSDL Parameter Information in C#

    - by jaws
    I am attempting to parse WSDL, along the lines of the example given here. The author notes, in the comments, that the example is not capable of drilling down into complex data types; and in fact, when I run the example, it does not appear to even handle simple data types. I have poked around in the System.Web.Services.Description.ServiceDescription class, which is used in the example, but cannot find any actual parameter or return type information at run time. I gather that I may need to do some manual parsing of an xsd file? Both Google and Stack Overflow appear to lack a complete example of how to drill down into complex types programmatically, so... how should I do this?
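
    A hedged sketch of the drill-down, using only System.Web.Services.Description and System.Xml.Schema (the file name is a placeholder, and real WSDLs with external imports need those resolved before Compile will succeed):

        using System;
        using System.Web.Services.Description;
        using System.Xml.Schema;

        class WsdlTypes
        {
            static void Main()
            {
                ServiceDescription sd = ServiceDescription.Read("service.wsdl");

                // compile the schemas inlined under wsdl:types
                XmlSchemaSet set = new XmlSchemaSet();
                foreach (XmlSchema schema in sd.Types.Schemas)
                    set.Add(schema);
                set.Compile();

                // every global element (message parts point at these by QName)
                foreach (XmlSchemaElement el in set.GlobalElements.Values)
                {
                    Console.WriteLine(el.QualifiedName);
                    XmlSchemaComplexType ct = el.ElementSchemaType as XmlSchemaComplexType;
                    XmlSchemaSequence seq = ct == null ? null : ct.ContentTypeParticle as XmlSchemaSequence;
                    if (seq == null) continue;
                    foreach (XmlSchemaObject child in seq.Items)
                    {
                        XmlSchemaElement e = child as XmlSchemaElement;
                        if (e != null)
                            Console.WriteLine("  " + e.Name + " : " + e.ElementSchemaType.QualifiedName);
                    }
                }
            }
        }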

    Read the article

  • Disappearing object function??

    - by WmasterJ
    Is there a reason for object functions to be unset or deleted, or simply not applied, for any reason at all that isn't intentional? I am maintaining someone else's code and have gone through it many times. I use Google Chrome's awesome debugger and also TextMate. These help me find the origin of an error relatively fast. The problem I have now is that I have an object: types. This object contains... types. And these types have functions and other variables attached to them. For some reason, this object has been passed by reference probably millions of times throughout the code, and when it comes to a certain part of the code, parts of it seem to have disappeared. Poof! And it's gone..! Anyone have a clue (other than it being removed somewhere else earlier in the code; I'm already looking for that)?

    Read the article

  • Uploadify refuses to upload WMV, FLV and MP4 files - SOLVED

    - by Jon Winstanley
    The Uploadify plugin for jQuery seems very good and works for most file types. However, it allows me to upload all file types apart from the ones I need! Namely .WMV, .FLV and .MP4. Uploads of any other type work. I have already tried changing the fileExt parameter and also tried removing it altogether. I have tested in Google Chrome, IE7 and Firefox, and none work for these file types. I have a ton of local projects already and uploading is not an issue on any other project; I even use the same example files. (This is the first time I have used Uploadify.) Is there a known reason for this behaviour? EDIT: Have found the issue. I had forgotten to add my usual .htaccess file to the example project.

    Read the article

  • How do I delete a foreign key in SQLAlchemy?

    - by Travis
    I'm using SQLAlchemy Migrate to keep track of database changes and I'm running into an issue with removing a foreign key. I have two tables: t_new is a new table, and t_exists is an existing table. I need to add t_new, then add a foreign key to t_exists. Then I need to be able to reverse the operation (which is where I'm having trouble).

        t_new = sa.Table("new", meta.metadata,
            sa.Column("new_id", sa.types.Integer, primary_key=True)
        )

        t_exists = sa.Table("exists", meta.metadata,
            sa.Column("exists_id", sa.types.Integer, primary_key=True),
            sa.Column(
                "new_id",
                sa.types.Integer,
                sa.ForeignKey("new.new_id", onupdate="CASCADE", ondelete="CASCADE"),
                nullable=False
            )
        )

    This works fine:

        t_new.create()
        t_exists.c.new_id.create()

    But this does not:

        t_exists.c.new_id.drop()
        t_new.drop()

    Trying to drop the foreign key column gives an error: 1025, "Error on rename of '.\my_db_name\#sql-1b0_2e6' to '.\my_db_name\exists' (errno: 150)". If I do this with raw SQL, I can remove the foreign key manually and then remove the column, but I haven't been able to figure out how to remove the foreign key with SQLAlchemy. How can I remove the foreign key, and then the column?
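
    sqlalchemy-migrate's changeset API also wraps constraints, so one way to make the reverse migration work on MySQL/InnoDB (which refuses to drop a column that an FK still references, hence errno 150) is to drop the constraint explicitly before the column; a sketch, assuming the tables are bound as above:

        from migrate.changeset.constraint import ForeignKeyConstraint

        # drop the FK first, then the column, then the table
        fk = ForeignKeyConstraint([t_exists.c.new_id], [t_new.c.new_id])
        fk.drop()

        t_exists.c.new_id.drop()
        t_new.drop()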

    Read the article

  • How would I best address this object type hierarchy? Some kind of enum hierarchy?

    - by FerretallicA
    I'm curious as to any solutions out there for addressing object hierarchies in an ORM approach (in this instance, using Entity Framework 4). I'm working through some docs on EF4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:

        INVENTORY ITEM TYPES:
        Hardware
            PC
                Desktop
                Server
                Laptop
            Accessory
                Input (keyboards, scanners etc)
                Output (monitors, printers etc)
                Storage (USB sticks, tape drives etc)
                Communication (network cards, routers etc)
        Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype etc). I don't really want to over-complicate my data model by giving each subtype its own inherited entity, even though no additional properties or methods are included (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this, but I can't put my finger on it. Any assistance or input appreciated!
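
    Given that the hierarchy is fixed and carries no extra properties, one lightweight option is a single flat enum with the hierarchy encoded in comments (or a helper), persisted as an int (EF4 cannot map enum properties directly, hence the wrapper); a sketch with invented names:

        public enum InventoryItemType
        {
            // Hardware > PC
            Desktop, Server, Laptop,
            // Hardware > Accessory
            Input, Output, Storage, Communication,
            // Software
            Software
        }

        public class InventoryItem
        {
            public int Id { get; set; }
            public string Name { get; set; }

            // EF4 maps the int column; the enum is exposed through a wrapper
            public int ItemTypeValue { get; set; }
            public InventoryItemType ItemType
            {
                get { return (InventoryItemType)ItemTypeValue; }
                set { ItemTypeValue = (int)value; }
            }
        }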

    Read the article

  • Rails: Polymorphic User Table a good idea with AuthLogic?

    - by sscirrus
    Hi everyone, I have a system where I need to log in three user types (customers, companies, and vendors) from one login form on the home page. I have created one User table that works according to AuthLogic's example app at http://github.com/binarylogic/authlogic_example. I have added a field called "User Type" that currently contains either 'Customer', 'Company', or 'Vendor'. Note: each user type contains many disparate fields, so I'm not sure if Single Table Inheritance is the best way to go (I would welcome corrections if this conclusion is invalid). Is this a polymorphic association, where each of the three types is 'tagged' with a User record? How should my models look so I have the right relationships between my User table and my user types Customer, Company, and Vendor? Thanks very much!
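
    In AuthLogic terms this is usually modelled the opposite way round from STI: a single users table holds the credentials, and a polymorphic profile association points at the type-specific table, so the disparate fields live apart. A sketch (column and model names assumed; users would need profile_id and profile_type columns):

        class User < ActiveRecord::Base
          acts_as_authentic
          belongs_to :profile, :polymorphic => true
        end

        class Customer < ActiveRecord::Base
          has_one :user, :as => :profile
        end

        class Company < ActiveRecord::Base
          has_one :user, :as => :profile
        end

        class Vendor < ActiveRecord::Base
          has_one :user, :as => :profile
        end

    The "User Type" string column then becomes redundant with profile_type.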

    Read the article
