Search Results

Search found 10076 results on 404 pages for 'high volume'.


  • Hiding flash component scrollbars using object/param syntax

    - by Kieran Benton
    Hi all, I'm not sure if this is possible (complete non-Flash developer speaking), but we have a 3rd-party component of which we only want to show a certain top-left portion. I've tried limiting the size of the HTML object container:

      <object type="application/x-shockwave-flash" width="600" height="415" data="<url>">
        <param name="movie" value="<url>" />
        <param name="wmode" value="transparent" />
        <param name="allowscriptaccess" value="always" />
        <param name="quality" value="high" />
        <param name="flashvars" value="<vars>" />
      </object>

    This limits it to 600x415, but it causes horizontal and vertical scrollbars to appear as part of the Flash component. Is there any standard way to override this behaviour? Thanks.

    Read the article

  • Low Power Speed Monitoring

    - by user555584
    I am aware of speed detection via GPS; however, since this runs as a background app, I am concerned about high power drain. I am looking to detect speed over, say, 5 mph, but it does not have to be speedometer-accurate. Is there a low-power way to detect that the phone is in motion, say by triangulation, or by tracking tower signal strength and newly seen / recently lost towers? I have an app design that depends on running in the background from launch and knowing whether or not the phone is in a car. Thanks!

    Read the article

  • Is there ever a reason to use Goto in modern .NET code?

    - by BenAlabaster
    I just found this code in Reflector in the .NET base libraries...

      if (this._PasswordStrengthRegularExpression != null)
      {
          this._PasswordStrengthRegularExpression = this._PasswordStrengthRegularExpression.Trim();
          if (this._PasswordStrengthRegularExpression.Length == 0)
          {
              goto Label_016C;
          }
          try
          {
              new Regex(this._PasswordStrengthRegularExpression);
              goto Label_016C;
          }
          catch (ArgumentException exception)
          {
              throw new ProviderException(exception.Message, exception);
          }
      }
      this._PasswordStrengthRegularExpression = string.Empty;
      Label_016C:
      ... // Other stuff

    I've heard all of the "thou shalt not use goto on fear of exile to hell for eternity" spiel. I've always held MS coders in fairly high regard, and while I may not have agreed with all of their decisions, I've always respected their reasoning. So - is there a good reason for code like this that I'm missing, or was this code extract just put together by a shitty developer? I'm hoping there is a good reason and I'm just blindly missing it. Thanks for everyone's input.
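
    For comparison, a minimal sketch of how the same control flow can be written without goto. This is not the actual framework source: the field name and exception type follow the decompiled snippet above, the method name is made up for the sketch, and labels like Label_016C in decompiler output are typically a compiler/decompiler artifact rather than something the original author wrote.

      using System.Text.RegularExpressions;
      using System.Configuration.Provider;

      // Behaviourally equivalent to the decompiled snippet: null becomes an empty
      // string, a non-empty value is trimmed and validated, and an invalid pattern
      // is rethrown as a ProviderException.
      private void NormalizePasswordStrengthRegex()
      {
          if (this._PasswordStrengthRegularExpression == null)
          {
              this._PasswordStrengthRegularExpression = string.Empty;
              return;
          }

          this._PasswordStrengthRegularExpression = this._PasswordStrengthRegularExpression.Trim();
          if (this._PasswordStrengthRegularExpression.Length == 0)
          {
              return;
          }

          try
          {
              new Regex(this._PasswordStrengthRegularExpression); // validate only
          }
          catch (ArgumentException exception)
          {
              throw new ProviderException(exception.Message, exception);
          }
      }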

    Read the article

  • What is the performance impact of enabling WebSphere PMI?

    - by Andrew Whitehouse
    I am currently looking at some JProfiler traces from our WebSphere-based application, and am noticing that a significant amount of CPU time is being spent in the class com.ibm.io.async.AsyncLibrary.getCompletionData2. I am guessing, but I am wondering whether this is PMI-related (and we do have this enabled). My knowledge of PMI is limited, as this is managed by another team. Is it expected that PMI can have this sort of impact? (If so) Is the only option to turn it off completely? Or are there some types of data capture that have a particularly high overhead?

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of the B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes, whereas a higher cardinality means less efficient storage but faster read performance, because the query has to navigate through fewer branches to narrow down the rows it is looking for. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100-million-row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question - how does cardinality affect write performance?

    Read the article

  • Implementing a linear, binary SVM (support vector machine)

    - by static_rtti
    I want to implement a simple SVM classifier, in the case of high-dimensional binary data (text), for which I think a simple linear SVM is best. The reason for implementing it myself is basically that I want to learn how it works, so using a library is not what I want. The problem is that most tutorials go up to an equation that can be solved as a "quadratic problem", but they never show an actual algorithm! So could you point me either to a very simple implementation I could study, or (better) to a tutorial that goes all the way to the implementation details? Thanks a lot!
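
    As an illustration of one way past the quadratic-programming step (a sketch, not taken from the article): a linear SVM can be trained with stochastic sub-gradient descent on the hinge loss, Pegasos-style. The bias term and the optional projection step are omitted, and dense arrays stand in for the sparse vectors you would actually use for text:

      using System;

      class LinearSvmSketch
      {
          // Minimize (lambda/2)*||w||^2 + average hinge loss over (x, y), y in {-1, +1}.
          public static double[] Train(double[][] x, int[] y, double lambda, int epochs)
          {
              int dim = x[0].Length;
              var w = new double[dim];
              var rng = new Random(42);
              long t = 0;

              for (int epoch = 0; epoch < epochs; epoch++)
              {
                  for (int step = 0; step < x.Length; step++)
                  {
                      int i = rng.Next(x.Length);           // random training example
                      t++;
                      double eta = 1.0 / (lambda * t);      // decaying step size

                      double margin = 0.0;
                      for (int d = 0; d < dim; d++) margin += w[d] * x[i][d];
                      margin *= y[i];

                      for (int d = 0; d < dim; d++)
                      {
                          w[d] *= 1.0 - eta * lambda;       // regularization shrink
                          if (margin < 1.0)
                              w[d] += eta * y[i] * x[i][d]; // hinge-loss sub-gradient step
                      }
                  }
              }
              return w;
          }

          public static int Predict(double[] w, double[] example)
          {
              double score = 0.0;
              for (int d = 0; d < w.Length; d++) score += w[d] * example[d];
              return score >= 0.0 ? 1 : -1;
          }
      }

    Each update shrinks the weights (the regularizer's sub-gradient) and, only when an example violates the margin, nudges them toward that example; for sparse text features the inner loops would run over the non-zero indices only.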

    Read the article

  • Can a CouchDB document update handler get an update conflict?

    - by jhs
    How likely is a revision conflict when using an update handler? Should I concern myself with conflict-handling code when writing a robust update function? As described in Document Update Handlers, CouchDB 0.10 and later allows on-demand server-side document modification. Update handlers can process non-JSON formats; but the other major features are these:
    - An HTTP front-end to arbitrarily complex document modification code
    - Similar code needn't be written for all possible clients—a DRY architecture
    - Execution is faster and less likely to hit a revision conflict
    I am unclear about the third point. Executing locally, the update handler will run much faster and with lower latency. But in situations with high contention, that does not guarantee a successful update. Or does the update handler guarantee a successful update?
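
    The article's answer is not reproduced here, but if an update handler can still lose a race under heavy write contention, the usual client-side hedge is simply to retry on HTTP 409. A rough sketch, where the database name, handler path and document id are placeholders:

      using System;
      using System.Net;
      using System.Net.Http;
      using System.Text;
      using System.Threading.Tasks;

      class CouchUpdateClientSketch
      {
          static readonly HttpClient Http = new HttpClient();

          // Invoke a document update handler and retry a few times if CouchDB
          // reports a revision conflict (409).
          static async Task<bool> InvokeHandlerAsync(string body)
          {
              var url = "http://localhost:5984/mydb/_design/app/_update/myhandler/mydoc";
              for (int attempt = 0; attempt < 3; attempt++)
              {
                  var content = new StringContent(body, Encoding.UTF8, "application/json");
                  HttpResponseMessage response = await Http.PutAsync(url, content);
                  if (response.StatusCode != HttpStatusCode.Conflict)
                      return response.IsSuccessStatusCode;
                  await Task.Delay(50 * (attempt + 1)); // brief backoff before retrying
              }
              return false;
          }
      }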

    Read the article

  • WAN Optimization Resources

    - by Paul
    I'm looking for resources on writing software to do WAN optimization. I've googled this and searched SO, but there doesn't seem to be much around. The only things I've found are high-level articles in tech magazines, and info for network admins on how to make use of existing WAN optimization products. I'm looking for something on the techniques etc. used to write WAN optimization software. It seems to be a dark art, and the people who know how to do it guard their secrets closely. Any suggestions?

    Read the article

  • How to recover gracefully from a C# UDP socket exception

    - by Gearoid Murphy
    Context: I'm porting a Linux Perl app to C#. The server listens on a UDP port and maintains multiple concurrent dialogs with remote clients via a single UDP socket. During testing, I send out high volumes of packets to the UDP server, randomly restarting the clients to observe the server registering the new connections. The problem is this: when I kill a UDP client, there may still be data on the server destined for that client. When the server tries to send this data, it gets an ICMP "no service available" message back, and consequently an exception occurs on the socket. I cannot reuse this socket: when I try to associate a C# async handler with it, it complains about the exception, so I have to close and reopen the UDP socket on the server port. Is this the only way around this problem? Surely there's some way of "fixing" the UDP socket, as technically UDP sockets shouldn't be aware of the status of a remote socket? Any help or pointers would be much appreciated. Thanks.
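
    One commonly used workaround on Windows (a sketch, not taken from the article) is to tell Winsock not to surface the ICMP-triggered reset on the UDP socket via the SIO_UDP_CONNRESET I/O control code, and to treat SocketError.ConnectionReset as non-fatal in the send/receive path:

      using System;
      using System.Net;
      using System.Net.Sockets;

      class UdpServerSketch
      {
          // SIO_UDP_CONNRESET = IOC_IN | IOC_VENDOR | 12 (0x9800000C as a signed int).
          const int SIO_UDP_CONNRESET = -1744830452;

          static Socket CreateServerSocket(int port)
          {
              var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
              socket.Bind(new IPEndPoint(IPAddress.Any, port));

              // Ask Winsock not to report ICMP "port unreachable" as an error
              // on later calls against this socket (Windows-specific).
              socket.IOControl(SIO_UDP_CONNRESET, new byte[] { 0, 0, 0, 0 }, null);
              return socket;
          }

          static void SendSafely(Socket socket, byte[] data, EndPoint client)
          {
              try
              {
                  socket.SendTo(data, client);
              }
              catch (SocketException ex)
              {
                  if (ex.SocketErrorCode != SocketError.ConnectionReset)
                      throw;
                  // The remote client is gone; drop its pending data and keep the socket alive.
              }
          }
      }

    With the reset suppressed, packets destined for a dead client are simply dropped and the same socket keeps serving the remaining clients.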

    Read the article

  • Fastest sort of fixed length 6 int array

    - by kriss
    Answering another StackOverflow question (this one) I stumbled upon an interesting sub-problem. What is the fastest way to sort an array of 6 ints? As the question is very low level (it will be executed by a GPU):
    - we can't assume libraries are available (and the call itself has its cost), only plain C
    - to avoid emptying the instruction pipeline (which has a very high cost) we should probably minimize branches, jumps, and every other kind of control-flow break (like those hidden behind sequence points in && or ||)
    - room is constrained and minimizing registers and memory use is an issue; ideally an in-place sort is probably best
    Really this question is a kind of golf where the goal is not to minimize source length but execution speed. I call it 'Zening' code, as used in the title of the book Zen of Code Optimization by Michael Abrash and its sequels.
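
    For illustration (a sketch, not taken from the article or its answers): one branch-light approach is a fixed sorting network - sort each half of 3, then merge, for 12 compare-exchanges in total. The sketch below is C#, but the compare-exchange list transfers directly to plain C, where the min/max would typically be written as conditional-move-friendly expressions:

      using System;

      class Sort6Sketch
      {
          // Compare-exchange: order a[i] <= a[j] without an explicit if on the data.
          static void CompareExchange(int[] a, int i, int j)
          {
              int lo = Math.Min(a[i], a[j]);
              int hi = Math.Max(a[i], a[j]);
              a[i] = lo;
              a[j] = hi;
          }

          // 12 fixed compare-exchanges: sort a[0..2], sort a[3..5], then merge.
          public static void Sort6(int[] a)
          {
              CompareExchange(a, 1, 2); CompareExchange(a, 0, 2); CompareExchange(a, 0, 1);
              CompareExchange(a, 4, 5); CompareExchange(a, 3, 5); CompareExchange(a, 3, 4);
              CompareExchange(a, 0, 3); CompareExchange(a, 1, 4); CompareExchange(a, 2, 5);
              CompareExchange(a, 2, 4); CompareExchange(a, 1, 3); CompareExchange(a, 2, 3);
          }

          static void Main()
          {
              var a = new[] { 5, 2, 6, 1, 4, 3 };
              Sort6(a);
              Console.WriteLine(string.Join(" ", a)); // 1 2 3 4 5 6
          }
      }

    The comparator sequence is data-independent, so there are no unpredictable branches on the values being sorted and the whole thing works in place.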

    Read the article

  • Troubles configuring SSL for an Apache host

    - by Ryan
    I configured it on Friday night and all worked well. Today for some reason it stopped working and I can't figure out why. When you go to the secure page it acts like I have a self-signed certificate, and I don't. I have the host configured like so:

      ServerAdmin [email protected]
      DocumentRoot "/path/to/site"
      Servername www.mydomain.com
      ServerAlias mydomain.com
      DirectoryIndex index.cfm index.htm
      SSLEngine on
      SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
      SSLCertificateFile /etc/httpd/path/to/mydomain.com.crt
      SSLCertificateKeyFile /etc/httpd/path/to/www.mydomain.com.key
      SSLCertificateChainFile /etc/httpd/path/to/gd_bundle.crt

    Apache starts with no errors and I can't seem to find anything meaningful in any of the logs. It's got to be something minor but I can't seem to see it. It's an updated CentOS/Apache VM. I have worn Google out.

    Read the article

  • prevent filesystem from entering read-only mode

    - by user788171
    I have found that my server's filesystem is continuously entering read-only mode. There have been some issues with the RAID1 array, but I have removed the bad disk from the array. However, it is still physically plugged into the system because I haven't had a chance to go over to the datacentre, and I suspect udev and the system kernel are still picking up the bad disk and throwing errors. In /var/log/messages, there are errors like this:

      Mar 2 06:53:14 nocloud kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
      Mar 2 06:53:14 nocloud kernel: ata1: irq_stat 0x00400040, connection status changed
      Mar 2 06:53:14 nocloud kernel: ata1: SError: { PHYRdyChg DevExch }
      Mar 2 06:53:14 nocloud kernel: ata1: hard resetting link
      Mar 2 06:53:20 nocloud kernel: ata1: link is slow to respond, please be patient (ready=0)
      Mar 2 06:53:21 nocloud kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      Mar 2 06:53:21 nocloud kernel: ata1.00: configured for UDMA/133
      Mar 2 06:53:21 nocloud kernel: ata1: EH complete

    This happens fairly randomly throughout the day until eventually the filesystem becomes read-only. When this happens, my system becomes non-operational, which kind of defeats the purpose of having a RAID1. Note, ata1 is the bad disk (I think ata1 corresponds to /dev/sda because they are both first in line). Under mdadm, /dev/sda1,2 is no longer being used, but I can't prevent the system kernel from continuing to query that disk when I am no longer using it and throwing these errors. Is there a way to prevent my filesystem from automatically going into read-only mode? Furthermore, is it safe to do so? Thanks in advance.

    EDIT: Additional information. Output from cat /proc/mdstat:

      md1 : active raid1 sdb2[1]
            976554876 blocks super 1.1 [2/1] [_U]
            bitmap: 5/8 pages [20KB], 65536KB chunk
      md0 : active raid1 sdb1[1]
            204788 blocks super 1.0 [2/1] [_U]

    Output from mount:

      /dev/mapper/VolGroup-LogVol00 on / type ext4 (rw,noatime)
      proc on /proc type proc (rw)
      sysfs on /sys type sysfs (rw)
      devpts on /dev/pts type devpts (rw,gid=5,mode=620)
      tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
      /dev/md0 on /boot type ext4 (rw)
      none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
      sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    EDIT2: pvdisplay output:

      --- Physical volume ---
      PV Name               /dev/md1
      VG Name               VolGroup
      PV Size               931.32 GiB / not usable 2.87 MiB
      Allocatable           yes (but full)
      PE Size               16.00 MiB
      Total PE              59604
      Free PE               0
      Allocated PE          59604

    Read the article

  • How to resolve this very heavy query that slows down the application?

    - by Juan Paredes
    Hi, we have a web application running in a production environment and at some point the client complained about how slow the application got. When we checked what was going on with the application and the database, we discovered this "precious" query being executed by several users at the same time (thus inflicting an extremely high load on the database server):

      SELECT NULL AS table_cat,
             o.owner AS table_schem,
             o.object_name AS table_name,
             o.object_type AS table_type,
             NULL AS remarks
      FROM all_objects o
      WHERE o.owner LIKE :1 ESCAPE :"SYS_B_0"
        AND o.object_name LIKE :2 ESCAPE :"SYS_B_1"
        AND o.object_type IN(:"SYS_B_2", :"SYS_B_3")
      ORDER BY table_type, table_schem, table_name

    Our application does not execute this query; I believe it is an internal Hibernate query. I've found little information on why Hibernate issues this extremely heavy query, so any help is very much appreciated! The production environment information: Red Hat Enterprise Linux 5.3 (Tikanga), JDK 1.5, web container OC4J (within Oracle Application Server), Oracle Database 10g Release 10.0.0.1, JDBC Driver for JDK 1.2 and 1.3, Hibernate version 3.2.6.ga. Thank you.

    Read the article

  • Indexing only one MySQL column value

    - by BrainCore
    I have a MySQL InnoDB table with a status column. The status can be 'done' or 'processing'. As the table grows, at most .1% of the status values will be 'processing,' whereas the other 99.9% of the values will be 'done.' This seems like a great candidate for an index due to the high selectivity for 'processing' (though not for 'done'). Is it possible to create an index for the status column that only indexes the value 'processing'? I do not want the index to waste an enormous amount of space indexing 'done.'

    Read the article

  • Suggest Cassandra data model for an existing schema

    - by Andriy Bohdan
    Hello guys! I hope there's someone who can help me suggest a suitable data model to be implemented using the NoSQL database Apache Cassandra. More than that, I need it to work under high load and large amounts of data. Simplified, I have 3 types of objects: Product, Tag, and ProductTag.

      Product:
        key - string key
        name - string
        .... - some other fields
      Tag:
        key - string key
        name - unique tag words
      ProductTag:
        product_key - foreign key referring to product
        tag_key - foreign key referring to tag
        rating - the rating of the tag for this product

    Each product may have 0 or many tags. A tag may be assigned to 1 or many products. This means the relation between products and tags is many-to-many in terms of relational databases. The value of "rating" is updated "very" often. I need to run the following queries:
    - Select objects by keys
    - Select tags for a product ordered by rating
    - Select products by tag ordered by rating
    - Update rating by product_key and tag_key
    The most important thing is to make these queries really fast on large amounts of data, considering that the rating is constantly updated.

    Read the article

  • Only show activities from a specific gadget in OpenSocial

    - by corné
    I created a gadget which is able to post activities:

      function postActivity(text) {
          var params = {};
          params[opensocial.Activity.Field.TITLE] = text;
          var activity = opensocial.newActivity(params);
          opensocial.requestCreateActivity(activity, opensocial.CreateActivityPriority.HIGH);
      };

    And I request activities by doing:

      req.add(req.newFetchActivitiesRequest(
          new opensocial.IdSpec({'userId' : 'VIEWER', 'groupId' : 'FRIENDS'})),
          'friends_activities');

    My question is: How can I filter activities to only show activities posted by a specific gadget? I reckon the field "appId" is meant for this, but this field is empty in the response.

    Read the article

  • Geographical deployment vs geo load balancing in SharePoint 2010

    - by vrajaraman
    We have company-wide SharePoint portals planned for a few thousand users. Since the users and their applications (hosted in SharePoint) are distributed among different countries, we would like to compare geo deployment vs geo load balancing. Please share your inputs. We are aware of this much: a geo SharePoint cluster means farms at the central and other sites, with the databases split regionally - a two-database cluster synced using log shipping, SAN sync, or SQL 2008 features like database mirroring. Versus: load balancing by URL using some 3rd-party solution, with all farms, sites, and databases centralised. Benefits we are expecting: 1. high availability; 2. disaster recovery management; 3. maintenance. I may have missed some of the points that need to be covered.

    Read the article

  • Quickest and easiest way to implement speech to text conversion for a small speech subset.

    - by sgtpeppers
    Hi, I want to implement a system that receives speech through a microphone on Mac OS X. I know arbitrary speech recognition is close to impossible without training the system, so I'm willing to restrict it to 10 simple sentences. It must recognize with a high degree of accuracy which of these 10 sentences is being spoken, generate the text, and add an entry to a remote MySQL database. With this being the architecture of the system I want to implement, could anyone give me an overview of the best way to go about it? I'm looking for ideas like open-source libraries to minimize the coding, as this is just a prototype application for a demonstration. Basically I'm looking for a quick and easy solution. Thanks!

    Read the article

  • Strategies to use Database Sequences?

    - by Bruno Brant
    Hello all, I have a high-end architecture which receives many requests every second (in fact, it can receive many requests every millisecond). The architecture is designed so that some controls rely on a certain unique id assigned to each request. To create such a UID we use a DB2 sequence. Right now I already understand that this approach is flawed, since using the database is costly, but it makes sense to do so because this value will also be used to log information in the database. My team has just found an increase of almost 1000% in elapsed time for each transaction, which we are assuming happened because of the sequence. Now I wonder: will using sequences serialize access to my application? Since they have to guarantee that increments work the way they should, they have to, right? So, are there better strategies for using sequences? Please assume that I have no other way of obtaining a unique id other than relying on the database.
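
    For illustration (a sketch, not taken from the article; the sequence name, block size and SQL text are placeholders, and DB2 sequences can also be defined with a CACHE clause for a related server-side effect): one common way to cut the per-request round trips is to fetch one sequence value per block and hand out ids locally, hi-lo style:

      using System;
      using System.Data;

      // One database round trip per block of ids instead of one per request.
      class BlockIdGenerator
      {
          private readonly Func<IDbConnection> _connectionFactory;
          private readonly int _blockSize;
          private readonly object _lock = new object();
          private long _nextId;
          private long _blockEnd;   // both start at 0, forcing a fetch on first use

          public BlockIdGenerator(Func<IDbConnection> connectionFactory, int blockSize)
          {
              _connectionFactory = connectionFactory;
              _blockSize = blockSize;
          }

          public long NextId()
          {
              lock (_lock)
              {
                  if (_nextId >= _blockEnd)
                  {
                      long block = FetchSequenceValue();   // one round trip per block
                      _nextId = block * _blockSize;
                      _blockEnd = _nextId + _blockSize;
                  }
                  return _nextId++;
              }
          }

          private long FetchSequenceValue()
          {
              using (IDbConnection conn = _connectionFactory())
              {
                  conn.Open();
                  using (IDbCommand cmd = conn.CreateCommand())
                  {
                      // Placeholder DB2-style statement; adjust to the real sequence name.
                      cmd.CommandText = "SELECT NEXT VALUE FOR REQUEST_SEQ FROM SYSIBM.SYSDUMMY1";
                      return Convert.ToInt64(cmd.ExecuteScalar());
                  }
              }
          }
      }

    Ids stay unique across processes because every fetched block maps to a disjoint range; the trade-offs are gaps and only approximate ordering, which is often acceptable for request tracking.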

    Read the article

  • w3wp.exe in ASP.NET production app is using 100% CPU. How to find the problem?

    - by Tucker
    Hi, we have an ASP.NET app in production where w3wp.exe is taking 100% CPU (4 cores - 4 threads at 25%) and the CPU load never goes down until we recycle the application pool (the app is alone in the application pool). Our error log has nothing; there are no exceptions being emitted (or at least we don't catch them), so we suspect a code problem (infinite loop / deadlock). The problem only arises after some hours running under high load (several thousand users). Is there any way to profile one of the EXISTING threads that is causing the CPU load? After taking a look at JetBrains' dotTrace profiler, it seems this is not possible due to limitations of the Profiling API, and we haven't managed to reproduce the problem in our testing environment. The app uses SQL Server 2005, LINQ to SQL and the System.Transactions API. Any suggestions for finding the problem?
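
    Not from the article, but one low-tech way to narrow this down in production is to snapshot per-thread CPU for the process and watch which OS thread ids keep accumulating processor time between snapshots; those ids can then be matched against a hang dump of w3wp.exe in a debugger. A rough sketch:

      using System;
      using System.Diagnostics;
      using System.Linq;

      // Log the OS threads of the current process that have consumed the most CPU.
      // Call it periodically (e.g. from a diagnostic page or a timer) and note
      // which thread ids keep climbing between calls.
      class CpuThreadSnapshot
      {
          public static void LogTopThreads(int top)
          {
              Process proc = Process.GetCurrentProcess();
              var hottest = proc.Threads
                  .Cast<ProcessThread>()
                  .OrderByDescending(t => t.TotalProcessorTime)
                  .Take(top);

              foreach (ProcessThread t in hottest)
              {
                  Console.WriteLine("OS thread {0}: total CPU {1}, state {2}",
                      t.Id, t.TotalProcessorTime, t.ThreadState);
              }
          }
      }

    Mapping a hot OS thread id back to a managed stack still means taking a dump of the process and inspecting it offline; the sketch only tells you which threads are worth looking at.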

    Read the article

  • Subclassing UIScrollView for drawing w/o views

    - by David Dunham
    I'm contemplating subclassing UIScrollView (the way UITextView does) to draw a fairly large amount of text (formatted in ways that NSTextView can't). So far the view won't actually scroll. I'm setting contentSize, and when I drag, I see the scroll indicator, but nothing changes (and I don't get a drawRect: message). An alternate approach is to use a child view, and I've done this. The view can be over 5000 pixels high, however, and I'm a bit concerned about performance on an actual device. (The other approach, being like UITableView, would be a huge pain - I'm "porting" Mac Cocoa code, and a collection of views would be a huge architecture change.) I've done some searching, but haven't found anyone who is using UIScrollView to do the drawing. Has anyone done this and know of any pitfalls?

    Read the article

  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine.

      # du -hsx /
      11000283    /
      # df -kT /
      Filesystem               Type 1K-blocks      Used Available Use% Mounted on
      /dev/mapper/csisv13-root ext4 516032952 361387456 128432532  74% /

    There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains:

      # lsof -a +L1 /
      COMMAND    PID   USER FD TYPE DEVICE SIZE/OFF NLINK     NODE NAME
      zabbix_ag 4902 zabbix  1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4902 zabbix  2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4906 zabbix  1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4906 zabbix  2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4907 zabbix  1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4907 zabbix  2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4908 zabbix  1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4908 zabbix  2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4909 zabbix  1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4909 zabbix  2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4910 zabbix  1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
      zabbix_ag 4910 zabbix  2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)

    I rebooted to see if fsck does anything. But, from /var/log/boot.log, it seems there are no issues:

      /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks

    Thinking maybe someone overzealously reserved root space, I checked the master record:

      # tune2fs -l /dev/mapper/server-root
      tune2fs 1.42 (29-Nov-2011)
      Filesystem volume name:   <none>
      Last mounted on:          /
      Filesystem UUID:          86430ade-cea7-46ce-979c-41769a41ecbe
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
      Filesystem flags:         signed_directory_hash
      Default mount options:    user_xattr acl
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              32768000
      Block count:              131064832
      Reserved block count:     6553241
      Free blocks:              5696264
      Free inodes:              28831903
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      992
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         8192
      Inode blocks per group:   512
      Flex block group size:    16
      Filesystem created:       Fri Feb 1 13:44:04 2013
      Last mount time:          Tue Aug 19 16:56:13 2014
      Last write time:          Fri Feb 1 13:51:28 2013
      Mount count:              9
      Maximum mount count:      -1
      Last checked:             Fri Feb 1 13:44:04 2013
      Check interval:           0 (<none>)
      Lifetime writes:          1215 GB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               256
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      First orphan inode:       28836028
      Default directory hash:   half_md4
      Directory Hash Seed:      bca55ff5-f530-48d1-8347-25c004f66d43
      Journal backup:           inode blocks

    The system is:

      # uname -a
      Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
      # cat /etc/lsb-release
      DISTRIB_ID=Ubuntu
      DISTRIB_RELEASE=12.04
      DISTRIB_CODENAME=precise
      DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

    Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?

    Read the article

  • Why darcs instead of git?

    - by Ctrl Alt D-1337
    Using pure functional languages can have a lot of benefits over using impure imperative ones, but low-level systems languages will generally allow you to achieve much greater performance, especially when they are imperative, because they let you specify the exact steps by which the CPU should compute the result. If there were ever a list of tools where high performance is an absolute must, I would put source version control systems right at the top of that list, and git achieves this very well - but performance is not its only advantage over many other types of version control systems anyway. The git team are handling the unsafe C code very well, and I never worry about the type system or any other features of the language it is written in. So why is it that there are a lot of Haskell developers who must use darcs, when they will only be using the finished product?

    Read the article

  • Windows XP machine not seeing external FAT32 partitions correctly

    - by Rob_before_edits
    About 8 months ago my Windows XP machine stopped being able to see FAT32 external drives when I plug them in... mostly. I will explain... It happens with all my FAT32 drives, whether they be unpowered external hard drives, powered external hard drives, SDHC cards plugged directly into the machine's card reader, or SDHC cards plugged in via a separate USB card reader. All of these drives/cards used to work fine on this machine. They all stopped working at about the same time. NTFS volumes are not affected. If I plug in NTFS external drives they are recognized right away. I even have one external drive with two partitions on it, one is NTFS which is recognized, the other is FAT32, which is not recognized. If I attach a FAT32 drive, then reboot, then the drive almost always becomes visible to the machine after the reboot. Sometimes I can plug in a FAT32 drive and it works right away. Not often though. I'd say I get lucky more often with SDHC cards than hard drives. I'm developing a theory that I only get lucky with hard drives if I'm running Acronis Disk Director when I plug them in, though that usually doesn't work either - I need more data here, this may be a red herring. Getting lucky with a hard drive is really rare, usually I have to reboot. When a FAT32 is recognized, either because I got lucky or because I rebooted, I can almost never safely disconnect it. It tells me "The device 'Generic volume' cannot be stopped right now. Try stopping the device again later". I can't seem to get around this. IIRC, I've tried closing every open window, and still no luck. Since I care about my data usually the only way to disconnect a FAT32 drive is to shut down the machine. As you can imagine, two reboots just to read a drive is getting pretty old... When the machine fails to see a FAT32 drive it usually comes up with the appropriate drive letter and the words "Local Disk" in Windows Explorer instead of the correct partition name. If I click on it I get "J:\ is not accessible. The parameter is incorrect." Before this problem arose I always clicked the "safely remove" button for everything, including SDHC cards where I think it's not necessary. I've known for a long time that this is the correct procedure for hard drives, so I don't think failing to do this was the cause of this problem (before someone asks :) Any answers or suggestions most welcome.

    Read the article

  • JD Edwards... call C#?

    - by rbobby
    Hey all, I know very little about JD Edwards. I have a client asking how to call an API we supply (as COM, C#, REST) from JD Edwards. I'm not getting much in terms of high quality answers from their tech guy... so I thought I'd ask here. Can JD Edwards call C#? Can JD Edwards call Java? Can JD Edwards call a Unix script? Can anyone point me towards anything useful in terms of developer/customization documentation? Thanks!

    Read the article
