Search Results

Search found 1554 results on 63 pages for 'sas macro'.

Page 56/63 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • Advice on refactoring PHP Project

    - by b0x
    I have a small SaaS ERP that was written some years ago in PHP. At the time it didn't use any framework, but the code isn't a mess. Nowadays the project is growing and I'm now working with 3 more programmers. They often ask me why we don't migrate to a framework such as Laravel. Although I'd love to try Laravel, I run a small business and I don't have the time or money to stop and spend a whole year rebuilding everything from scratch. I need to live and pay the bills. So I've read a lot about this matter, and I decided that refactoring is the best way to do it. Also, I'm not so sure that a framework will make things easy. Business goals are: Make the code easier for newly hired programmers. Separate the "view", in order to: release different versions of this product (using the same code) under different brands and websites at minimum cost (just changing the view); release different versions to fit mobile/tablet. Make different types of this product, selling packages as if they were plugins. Develop custom packages for some customers (like plugins/add-ons that they can buy to bolt onto the main application). Code goals: Introduce best practices and standards for everyone. Try to build my own MVC structure. Improve validation of data/forms (today it is mixed between ajax and classes). Create automated testing routines for quality assurance. My current project structure: class\ extra\ hd\ logs\ public_html\ public_html\includes\ public_html\css|js|images\ class\ There are three types of classes. They are all “autoloaded” with something similar to PSR-0, but I don't use namespaces. 1. class.Something.php Connects to the database using specific methods, e.g. Costumer->list(). It uses “class.Db.php”, which is an abstraction over mysql in every method. 2. class.SomethingProc.php Does things that “join” results coming from “class.Something.php”, like IF/ELSE logic and math operations. 3. class.SomethingHTML.php The classes with the “HTML” suffix implement only static methods and emit HTML only. A real-life example: All the programmers need to use $cSomething ($c for class) and $arrSomething (for array). Costumer.php (view) <?php $cCostumer = new Costumer(); $arrCostumer = $cCostumer->list(); echo CostumerHTML::table($arrCostumer); ?> Extra\ stores 3rd-party projects/classes from others, such as mPDF, PHPMailer, etc. Hd\ stores users' files outside the wwwroot dir. Logs\ stores PHP logs and the system's own logs (we have a static Log::error() method that we put in every method of every class). Public_html\ stores the files that people use. Public_html\includes\ stores the main “config.php” file and all files that do “ajax things”, ajax.Costumer.php for example. Help is needed ;) So, as you can see, we have some standards, and also for database things. But I want to write a manual of our rules, something I can give to any new programmer at my company so he can get going. This is not a total mess, but it could be better in light of current practices. What could I do to separate this into MVC, so as to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers, without breaking the main application. URLs for tutorials, books, etc. would be nice.
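
    On the "separate the view" goal, a minimal sketch of the direction this usually takes — the View class, the per-brand template folders and the costumer_table template are all hypothetical, while Costumer follows the question's naming conventions:

        <?php
        // Hypothetical view layer: one renderer, swappable template dirs per brand.
        class View
        {
            private $templateDir;

            public function __construct(string $brand)
            {
                // Each brand/device gets its own template folder (illustrative layout).
                $this->templateDir = __DIR__ . "/templates/" . $brand;
            }

            public function render(string $template, array $data): string
            {
                extract($data);               // expose $arrCostumer etc. to the template
                ob_start();
                include $this->templateDir . "/" . $template . ".php";
                return ob_get_clean();
            }
        }

        // The controller stays identical for every brand; only the View changes.
        $cCostumer   = new Costumer();
        $arrCostumer = $cCostumer->list();
        $view        = new View("brand-a");   // or "brand-b", "mobile", ...
        echo $view->render("costumer_table", ['arrCostumer' => $arrCostumer]);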

    Read the article

  • Linux buffer cache effect on IO writes?

    - by Patrick LeBoutillier
    I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance. To try and narrow down the problem I used fio directly on the SAS disk to monitor the performance. Here is the output of 2 fio runs (the first with direct=1, the second one direct=0): Config: [test] rw=write blocksize=32k size=20G filename=/dev/sda # direct=1 Run 1: test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1 Starting 1 process Jobs: 1 (f=1): [W] [100.0% done] [0K/205M /s] [0/6K iops] [eta 00m:00s] test: (groupid=0, jobs=1): err= 0: pid=4667 write: io=20,480MB, bw=199MB/s, iops=6,381, runt=102698msec clat (usec): min=104, max=13,388, avg=152.06, stdev=72.43 bw (KB/s) : min=192448, max=213824, per=100.01%, avg=204232.82, stdev=4084.67 cpu : usr=3.37%, sys=16.55%, ctx=655410, majf=0, minf=29 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued r/w: total=0/655360, short=0/0 lat (usec): 250=99.50%, 500=0.45%, 750=0.01%, 1000=0.01% lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01% Run status group 0 (all jobs): WRITE: io=20,480MB, aggrb=199MB/s, minb=204MB/s, maxb=204MB/s, mint=102698msec, maxt=102698msec Disk stats (read/write): sda: ios=0/655238, merge=0/0, ticks=0/79552, in_queue=78640, util=76.55% Run 2: test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1 Starting 1 process Jobs: 1 (f=1): [W] [100.0% done] [0K/0K /s] [0/0 iops] [eta 00m:00s] test: (groupid=0, jobs=1): err= 0: pid=4733 write: io=20,480MB, bw=91,265KB/s, iops=2,852, runt=229786msec clat (usec): min=16, max=127K, avg=349.53, stdev=4694.98 bw (KB/s) : min=56013, max=1390016, per=101.47%, avg=92607.31, stdev=167453.17 cpu : usr=0.41%, sys=6.93%, ctx=21128, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued r/w: total=0/655360, short=0/0 lat (usec): 20=5.53%, 50=93.89%, 100=0.02%, 250=0.01%, 500=0.01% lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12% lat (msec): 100=0.38%, 250=0.04% Run status group 0 (all jobs): WRITE: io=20,480MB, aggrb=91,265KB/s, minb=93,455KB/s, maxb=93,455KB/s, mint=229786msec, maxt=229786msec Disk stats (read/write): sda: ios=8/79811, merge=7/7721388, ticks=9/32418456, in_queue=32471983, util=98.98% I'm not knowledgeable enough with fio to interpret the results, but I don't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT. Can someone help me interpret the fio output? Are there any kernel tunings that could fix/minimize the problem? Thanks a lot,
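
    On the tuning question — hedged, since the right values depend entirely on the workload — the writeback knobs under /proc/sys/vm are the usual first stop; the numbers below are illustrative starting points, not recommendations:

        # Shrink the amount of dirty page cache the kernel accumulates before
        # forcing writeback (stock defaults are considerably higher).
        sysctl -w vm.dirty_background_ratio=5     # background flusher starts earlier
        sysctl -w vm.dirty_ratio=10               # writers get throttled sooner
        sysctl -w vm.dirty_expire_centisecs=1500  # dirty data counts as "old" sooner

        # For a one-off bulk copy, bypassing the cache entirely sidesteps the issue:
        dd if=/source/bigfile of=/dest/bigfile bs=1M oflag=direct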

    Read the article

  • l2tp server always 'sent [CCP ResetReq id=0x3]' when got compressed data request

    - by wilbur
    I have built an xl2tpd/IPsec server on my Ubuntu 12.04.3 box, and I managed to make an L2TP VPN connection to the xl2tpd server from my Android phone. The xl2tpd log said xl2tpd[10828]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[10828]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[10828]: setsockopt recvref[22]: Protocol not available xl2tpd[10828]: This binary does not support kernel L2TP. xl2tpd[10828]: xl2tpd version xl2tpd-1.2.8 started on atime.me PID:10828 xl2tpd[10828]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[10828]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[10828]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[10828]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[10828]: Listening on IP address 0.0.0.0, port 1701 xl2tpd[10828]: control_finish: Peer requested tunnel 39154 twice, ignoring second one. xl2tpd[10828]: Connection established to 117.136.8.59, 43149. Local: 25339, Remote: 39154 (ref=0/0). LNS session is 'default' However, I cannot access the web in my browser. The pppd log said rcvd [Compressed data] 00 1d 82 c4 7c 04 d8 09 ... sent [CCP ResetReq id=0x7] I have googled a lot and found that this is mostly caused by an MPPE decompression error. I have disabled BSD-Compress compression with nobsdcomp in /etc/ppp/xl2tpd-options, but it did not help. I used openswan-2.6.33 and xl2tpd-1.2.8, both built from source. And my configurations: /etc/ipsec.conf version 2.0 config setup nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12 oe=off protostack=netkey conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=3 rekey=no ikelifetime=8h keylife=1h type=transport left=106.186.121.214 leftprotoport=17/1701 right=%any rightprotoport=17/%any /etc/xl2tpd/xl2tpd.conf [global] ipsec saref = yes [lns default] local ip = 10.10.11.1 ip range = 10.10.11.2-10.10.11.245 refuse chap = yes refuse pap = yes require authentication = yes ppp debug = yes pppoptfile = /etc/ppp/xl2tpd-options length bit = yes /etc/ppp/xl2tpd-options require-mschap-v2 ms-dns 8.8.8.8 ms-dns 8.8.4.4 asyncmap 0 auth crtscts lock hide-password modem name l2tpd proxyarp lcp-echo-interval 30 lcp-echo-failure 4 debug nobsdcomp Any suggestions? Thanks in advance.
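
    Given that the ResetReq is at the CCP layer rather than BSD-Compress specifically, one avenue — a guess from the symptoms, not a verified fix — is to stop pppd negotiating CCP at all in the options file:

        # /etc/ppp/xl2tpd-options (additions; keep the existing options)
        noccp        # refuse CCP negotiation entirely, so no compressed frames arrive
        nomppe       # explicitly refuse MPPE as well (harmless if already unused)
        nodeflate    # also refuse Deflate compression
        nobsdcomp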

    Read the article

  • Dell PowerEdge R720 - Corrupted RAID

    - by BT643
    Apologies in advance for the lengthy question. We have a Dell PowerEdge R720 server with: 2 x 136GB SAS drives in RAID 1 for the OS (Ubuntu Server 12.04) 6 x 3TB SATA drives in RAID 5 for data A few days ago we were getting errors when trying to access files on the large RAID 5 partition. We rebooted the server and got a message that the RAID controller had found a foreign config. We've had this before, and just needed to use Dell's RAID configuration utility to import the foreign config on the RAID. Last time this worked, but this time it started doing a disk check, then we got this: FSCK has returned the following: "/dev/sdb1 inode 364738 has a bad extended attribute block 7 /dev/sdb1 unexpected inconsistency run fsck manually (i.e without -a or -p options) MOUNTALL fsck /ourdatapartition [1019] terminated with status 4 MOUNTALL filesystem has errors /ourdatapartition errors were found while checking the disk drive for /ourdatapartition Press F to fix errors, I to Ignore or M for Manual Recovery" We pressed F to try and fix the errors, but it eventually errored with: Inode 275841084, i_blocks is 167080, should be 0. Fix? yes Inode 275841141 has an invalid extend node (blk 2206761006, lblk 0) Clear? yes Inode 275841141, i_blocks is 227872, should be 0. Fix? yes Inode 275842303 has an invalid extend node (blk 2206760975, lblk 0) Clear? yes .... Error storing directory block information (inode=275906766, block=0, num=2699516178): Memory allocation failed /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED ***** e2fsck: aborted /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED ***** mountall: fsck /ourdatapartition [1286] terminated with status 9 mountall: Unrecoverable fsck error: /ourdatapartition We noticed one of the drive lights was not lit at all, and thought this drive may have failed and be the problem. We replaced the drive with a spare and tried "F" to repair it again, but we keep getting the same error as above. In the RAID configuration utility, all drives show as "online" and "optimal". We do have this data on another replicated server, so we're not worried about "recovering" anything; we just want to get the system back online ASAP. The server has 64 or 32GB memory, can't remember off the top of my head, but either way, with a 14TB RAID, I think it may still not be enough. Thanks EDIT - I checked the memory usage while fsck was running as suggested, and after 2 or 3 minutes it was using up nearly all of our server's memory. When it failed after 5 minutes or so with the error in my post, the memory immediately freed up again.
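
    The "Memory allocation failed" abort is e2fsck itself running out of RAM on a 14TB filesystem, and e2fsck has a documented escape hatch for exactly this: let it spill its scratch tables to disk via /etc/e2fsck.conf. A sketch (the directory path is up to you):

        # /etc/e2fsck.conf -- let e2fsck keep its bookkeeping on disk
        # (create the directory first, e.g.: mkdir -p /var/cache/e2fsck)
        [scratch_files]
        directory = /var/cache/e2fsck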

    Read the article

  • Should I choose KVM/XEN over OpenVZ or use them together?

    - by Krystian
    I've got a dual Xeon E5504 server with [for now] only 8GB of RAM. Storage isn't impressive either: 3x 146GB SAS in RAID 5 + 500GB SATA drives. Currently it works as a development server, but it's over-specced for our needs, and since our development methods have changed over the last 2 years, we decided it will work as a production system for some of our applications + we would like to have a separate system for testing/research. Our apps are mainly web apps deployed on tomcats [plural, as some of the apps require older versions] and connected to Postgres. I would like to have a production system where only httpd+tomcat+db are set up and nothing else runs there. A sterile system. Apart from that, I would like a test system, where I can play with different JVM settings, deploy my test apps, play with tomcat/httpd settings and restart them without interfering with the production system. Apart from that, I would like to be able to play with different Linux flavors, with newer kernels, to test how they work etc. I know this is not possible with OpenVZ and I would have to choose KVM for that. I am thinking about merging the two, and setting up KVM to be able to work with different systems [Linux only, to be frank] + use OpenVZ to set up separate machines for my development needs. I would simply go with that, but reading here and there about the performance impact full virtualization has over containers, and looking at the specs of my server, makes me think twice about it. I don't want to lose too much performance, especially because of the nature of my apps [a few JVMs running at the same time]. It will be my first time with virtualization, apart from using desktop virtualbox/vmserver. Although I am a fast learner, I don't want to mess with the main system so much that it will break the production apps or make them crawl. Although they are more or less internal apps and they don't produce much load, they need to be stable. I've read that a KVM host is a normal Linux installation and it allows running normal processes on it. If that is so, does it allow running OpenVZ as well? I mean... can I have KVM and OpenVZ running on the same system/kernel? Or do I have to set up another system to run OpenVZ containers? How much performance impact can this have for me? Will my hardware suffice? Oh, and one more thing... unfortunately I'm quite limited with the funds... I'm looking for a free solution only :/
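
    On the "KVM host is a normal Linux installation" point: that's right, and it's also why the two don't usually mix — OpenVZ needs its own patched kernel, so running OpenVZ containers and KVM guests on one stock kernel is generally a non-starter. Quick sanity checks for the KVM half (standard commands):

        # Does the CPU expose hardware virtualization (Intel VT-x / AMD-V)?
        egrep -c '(vmx|svm)' /proc/cpuinfo    # > 0 means KVM can run at full speed

        # Are the KVM modules loadable on the running kernel?
        modprobe kvm_intel 2>/dev/null || modprobe kvm_amd
        lsmod | grep kvm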

    Read the article

  • OWB 11gR2 - Find and Search Metadata in Designer

    - by David Allan
    Here are some tools and techniques for finding objects, specifically in the design repository. There are ways of navigating and collating objects that are useful for day to day development and build-time usage - this includes features out of the box and utilities constructed on top. There are a variety of techniques to navigate and find objects in the repository, the first 3 are out of the box, the 4th is an expert utility. Navigating by the tree, grouping by project and module - ok if you are aware of the exact module/folder that objects reside in. The structure panel is a useful way of finding parts of an object, especially when large rather than using the canvas. In large scale projects it helps to have accelerators (either find or collections below). Advanced find to search by name - 11gR2 included a find capability specifically for large scale projects. There were improvements in both the tree search and the object editors (including highlighting in mapping for example). So you can now do regular expression based search and quickly navigate to objects within a repository. Collections - logically organize your objects into virtual folders by shortcutting the actual objects. This is useful for a range of things since all the OWB services operate on collections too (export/import, validation, deployment). See the post here for new collection functionality in 11gR2. Reports for searching by type, updated on, updated by etc. Useful for activities such as periodic incremental actions (deploy all mappings changed in the past week). The report style view is useful since I can quickly see who changed what and when. You can see all the audit details for objects within each objects property inspector, but its useful to just get all objects changed today or example, all objects changed since my last build etc. This utility combines both UI extensions via experts and the public views on the repository. In the figure to the right you see the contextual option 'Object Search' which invokes the utility, you can see I have quite a number of modules within my project. Figure out all the potential objects which have been changed is not simple. The utility is an expert which provides this kind of search capability. The utility provides a report of the objects in the design repository which satisfy some filter criteria. The type of criteria includes; objects updated in the last n days optionally filter the objects updated by user filter the user by project and by type (table/mappings etc.) The search dialog appears with these options, you can multi-select the object types, so for example you can select TABLE and MAPPING. Its also possible to search across projects if need be. If you have multiple users using the repository you can define the OWB user name in the 'Updated by' property to restrict the report to just that user also. Finally there is a search name that will be used for some of the options such as building a collection - this name is used for the collection to be built. In the example I have done, I've just searched my project for all process flows and mappings that users have updated in the last 7 days. The results of the query are returned in a table containing the object names, types, full path and audit details. The columns are sort-able, you can sort the results by name, type, path etc. 
One of the cool things here is that you can then perform operations on these objects - such as edit them, export a single selection or the entire results to MDL, create a collection from the results (now you have a saved set of references in the repository, and you can do deploy/export etc.), create a deployment script from the results... or even add in your own ideas! You see from this that you can do bulk operations on sets of objects based on search results. So for example, selecting the 'Build Collection' option creates a collection with all of the objects from my search, and you can subsequently deploy/generate/maintain this collection of objects. Under the hood, the expert is just basic OMB commands from the product and the use of the public views on the design repository. You can see how easy it is to build up macro-like capabilities that will help you do day-to-day as well as build-like tasks on sets of objects.

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
    Oracle produced a World Record batch throughput for single system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2. The workload includes both online and batch workload. The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne Long and Short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute). In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were executed each in separate Oracle Solaris Containers which enabled optimal system resources distribution and performance together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance. This configuration with SPARC T4-2 server has achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server. The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported only 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server compared to earlier published results of 10,000 users and 67 UBEs per minute on two SPARC T4-2 and SPARC T4-1. The Oracle DB server used mirrored (RAID 1) volumes for the database providing high availability for the data without impacting performance. Performance Landscape JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark Consolidated Online with Batch Workload System Rack Units BatchRate(UBEs/m) Online Users Users /Units Users /Core Version SPARC T4-2 (2 x SPARC T4, 2.85 GHz) 3 95.5 8,000 2,667 500 9.0.2 IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores) 8 65 12,000 1,500 375 9.0.2 Batch Rate (UBEs/m) — Batch transaction rate in UBEs per minute Configuration Summary Hardware Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz 256 GB memory 4 x 300 GB 10K RPM SAS internal disk 2 x 300 GB internal SSD 2 x Sun Storage F5100 Flash Arrays Software Configuration: Oracle Solaris 10 Oracle Solaris Containers JD Edwards EnterpriseOne 9.0.2 JD Edwards EnterpriseOne Tools (8.98.4.2) Oracle WebLogic Server 11g (10.3.4) Oracle HTTP Server 11g Oracle Database 11g Release 2 (11.2.0.1) Benchmark Description JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. 
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE – Universal Business Engine workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. Oracle's UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets to demonstrate consolidation of multiple applications, web servers and the database with best resource utilizations. Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. A Oracle WebLogic vertical cluster was configured on each WebServer Container with twelve managed instances each to load balance users' requests and to provide the infrastructure that enables scaling to high number of users with ease of deployment and high availability. The database log writer was run in the real time RT class and bound to a processor set. The database redo logs were configured on the raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 Short UBEs, 4 Long UBEs concurrently as the mixed size batch workload. The mixed size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by the LoadRunner. See Also SPARC T4-2 Server oracle.com OTN JD Edwards EnterpriseOne oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Oracle Fusion Middleware oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

    Read the article

  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of a SA is broad so for the sake of this question let's define a SA as follows: A Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position I don't want to be away from coding. I might sacrifice some time to interface with clients and Business Analysts etc., but I am still technically involved and I'm not just aware of what's going on through meetings. With these points in mind, what should a SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way," i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project. If it's a fresh project they have more choices, whereas they might have less for existing projects. Here are some SA stories I have experienced as a way of sharing some background in hopes that answers to my questions might also shed some light on these issues: I've worked with an SA who code reviewed literally every single line of code of the team. The SA would do this for not just our project but other projects in the organization (imagine the time spent on this). At first it was useful to enforce certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think of the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian. One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently while the other library would've saved us a lot of time. Fast forward to the last month of the project and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly the estimates used for the project were based on the old approach which my project was forbidden from using so it wasn't an appropriate indicator for estimation. I would hear the PM say "we've done this before," when in reality they had not since we were using a new library and the devs working on it were not the same devs used on the old project. The SA who would enforce the usage of DTOs, DOs, BOs, Service layers and so on for all projects. New devs had to learn this architecture and the SA adamantly enforced usage guidelines. Exceptions to usage guidelines were made when it was absolutely difficult to follow the guidelines. The SA was grounded in their approach. 
Classes for DTOs and all CRUD operations were generated via CodeSmith and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework. I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to: What should an SA bring to the table? How can they strike a balance in their decision making? Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules? Anything else to consider? Thanks! I'm sure these job tasks are easily extended to people who are senior devs or technical leads, so feel free to answer at that capacity as well.

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat-controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best. “A language that doesn‘t affect the way you think about programming is not worth knowing“ With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems. Clojure Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state. Haskell Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed. Erlang Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state. The Benefits of Multilingualism By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state. 
I often find myself refactoring methods like this… void SomeMethod(bool doThis, bool doThat) { if (!(doThis ^ doThat)) throw new ArgumentException(“At least one arg should be true”); if (doThis) DoThis(); if (doThat) DoThat(); } …into a type-based solution, like this: enum Action { DoThis, DoThat, Both }; void SomeMethod(Action action) { if (action == Action.DoThis || action == Action.Both) DoThis(); if (action == Action.DoThat || action == Action.Both) DoThat(); } At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but is just one of many ideas that I’ve taken from one language and implemented in another.
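
    To make the Erlang point concrete, here is a minimal sketch of my own (not from the original post): each counter is a process, its entire state is just the loop argument, and all access is by message, so there is no shared variable to lock:

        -module(counter).
        -export([start/0, loop/1]).

        %% Spawn a counter process; its whole state is the argument N.
        start() -> spawn(?MODULE, loop, [0]).

        loop(N) ->
            receive
                increment   -> loop(N + 1);        % "mutation" = recurse with new state
                {get, From} -> From ! {count, N},  % reply by message, never by memory
                               loop(N)
            end.

        %% Usage: C = counter:start(), C ! increment, C ! {get, self()}.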

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question, feel free to skip down for the specific question Hello, I've been very interested in the idea of abstract programming for the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year, and while I got close, I never released anything beyond my compiler. This is something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes of course you noob! You can't account for everything!" To which I have to reply, "Why not?" To give you some background on what I'm talking about, this all started with writing what was maybe a shade-of-gray-hat SEO tool. I found myself constantly having to create similar, but slightly different, sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut mixed with the delete button. It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code as time went on, and I could subversion stuff out and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this list of requirements: Should be able to use networking A "Macro" or "Expression system" which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google)) Multithreaded Able to add UI elements through some type of XML -- people can make their own addons etc. Can use third party APIs through the plugins, such as Microsoft CRM, Exchange, etc. This would allow the software to essentially be used for everything - really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult. So my question: With so many attempts at this, I'm out of ideas how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and draw everything out to completion? These are things I struggle with. P.s., I'm using C# as my main language.
I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case, or if I'm just a bad programmer. Thanks for your time.
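
    On the plugin-system thread specifically, the conventional C# starting point is a reflection-based loader over a shared interface — a sketch, where IPlugin and the plugins folder are assumptions of mine, not part of the question:

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // Shared contract, referenced by both the host and every plugin DLL.
        public interface IPlugin
        {
            string Name { get; }
            void Execute();
        }

        public static class PluginLoader
        {
            // Scans a folder for DLLs and instantiates every concrete IPlugin found.
            public static IPlugin[] LoadAll(string pluginDir)
            {
                return Directory.GetFiles(pluginDir, "*.dll")
                    .SelectMany(path => Assembly.LoadFrom(path).GetTypes())
                    .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                                && !t.IsAbstract && !t.IsInterface)
                    .Select(t => (IPlugin)Activator.CreateInstance(t))
                    .ToArray();
            }
        }

        // Usage: foreach (var p in PluginLoader.LoadAll(@"plugins")) p.Execute();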

    Read the article

  • QT: trouble with qobject_cast

    - by weevilo
    I have derived QGraphicsItem and QGraphicsScene classes. I want the items to be able to call scene() and get a DerivedItem * instead of a QGraphicsItem *, so I reimplemented QGraphicsScene::itemAt to return a derived pointer. DerivedItem* DerivedScene::itemAt( const QPointF &position, const QTransform &dt ) const { return qobject_cast< DerivedItem * >( QGraphicsScene::itemAt(position, dt) ); } I get the following error (Qt 4.6, GCC 4.4.3 on Ubuntu 10.04): scene.cpp: In member function ‘DerivedItem* DerivedScene::itemAt(qreal, qreal, const QTransform&) const’: scene.cpp:28: error: no matching function for call to ‘qobject_cast(QGraphicsItem*)’ I then noticed QGraphicsItem doesn't inherit QObject, so I made my derived QGraphicsItem class inherit from both QObject and QGraphicsItem, and after adding the Q_OBJECT macro and rebuilding the project I get the same error. Am I going about this the wrong way? I know it's supposed to be bad design to cast a parent class to a child, but in this case it seems like what I want, since my derived item class has new functionality and its objects need a way to call that new functionality on items around themselves, and asking the item's scene object with itemAt() seems like the best way - but I need itemAt() to return a pointer of the right type. I can get around this by having the derived items cast the QGraphicsItem * returned by QGraphicsScene::itemAt() using dynamic_cast, but I don't really understand why that works and not qobject_cast, or the benefits and disadvantages of using dynamic_cast vs. qobject_cast.
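
    For what it's worth: qobject_cast only accepts QObject-derived arguments because it works off moc metadata, which QGraphicsItem doesn't carry, while dynamic_cast uses ordinary C++ RTTI — which is why the latter works here. The Qt-idiomatic route for item hierarchies is qgraphicsitem_cast with a type() override; a sketch along the lines of the question's classes:

        #include <QGraphicsItem>
        #include <QGraphicsScene>

        class DerivedItem : public QGraphicsItem
        {
        public:
            // Required hook for qgraphicsitem_cast: report a unique type id.
            enum { Type = UserType + 1 };
            int type() const { return Type; }
            // ... boundingRect(), paint(), and the new functionality ...
        };

        DerivedItem *DerivedScene::itemAt(const QPointF &position,
                                          const QTransform &dt) const
        {
            // Returns 0 if the topmost item isn't actually a DerivedItem.
            return qgraphicsitem_cast<DerivedItem *>(
                QGraphicsScene::itemAt(position, dt));
        }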

    Read the article

  • VS2010 - How to automatically stop compile on first compile error

    - by Ben Robbins
    {rant}First I'd like to say that this IS NOT A DUPLICATE. I've asked this question previously but it got closed as a duplicate when it isn't. This question is SPECIFIC to VS 2010 and the answers to the so-called duplicate work in VS 2008 but not in VS 2010 (at least not for me or anyone I know). So before you go closing something as a duplicate how about you read the question carefully and try the answer for yourself and see if it actually works. Apologies for the rant but there is no obvious way to contact the SO police that closed the issue or get it reopened. {/rant} At work we have a C# solution with over 80 projects. In VS 2008 we use a macro to stop the compile as soon as a project in the solution fails to build (see this question for several options for VS 2005 & VS 2008: http://stackoverflow.com/questions/134796/how-to-automatically-stop-visual-c-build-at-first-compile-error). Is it possible to do the same in VS 2010? What we have found is that in VS 2010 the macros don't work (at least I couldn't get them to work) as it appears that the environment events don't fire in VS 2010. The default behaviour is to continue as far as possible and display a list of errors in the error window. I'm happy for it to stop either as soon as an error is encountered (file-level) or as soon as a project fails to build (project-level). Answers for VS 2010 only please. If the macros do work then a detailed explanation of how to configure them for VS 2010 would be appreciated. Thanks.
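
    For reference, the VS 2008 version of the macro lives in the EnvironmentEvents module (Tools > Macros > Macro IDE) and looks like the sketch below; whether OnBuildProjConfigDone actually fires in VS 2010 is exactly what's disputed here, so treat it as a baseline to test against rather than a confirmed fix:

        ' EnvironmentEvents module
        Private Sub BuildEvents_OnBuildProjConfigDone( _
                ByVal Project As String, ByVal ProjectConfig As String, _
                ByVal Platform As String, ByVal SolutionConfig As String, _
                ByVal Success As Boolean) Handles BuildEvents.OnBuildProjConfigDone
            If Success = False Then
                ' Stop the rest of the solution build as soon as one project fails.
                DTE.ExecuteCommand("Build.Cancel")
            End If
        End Sub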

    Read the article

  • how many types of code signing certificates do I need?

    - by gerryLowry
    In Canada, website SSL certificates can be had for as low as US$10. Unfortunately, code signing certificates cost about 10 times as much. One website mentions Vista compatibility... this seems strange, because my assumption is they must support XP, Vista, Windows 7, Server 2003, and Server 2008, or they would be useless. https://secure.ksoftware.net/code_signing.html US$99 Supported Platforms: Microsoft Authenticode. Sign any Microsoft executable format (32 and 64 bit EXE, DLL, OCX, or any ActiveX control). Signing hardware drivers is not currently supported. Adobe AIR. Sign any Adobe AIR application. Java. Sign any JAR applet. Microsoft Office. Sign any MS Office Macro or VBA (Visual Basic for Applications) file. Mozilla. Sign any Mozilla Object file. The implication is that a single code signing certificate can do ALL of the above. ksoftware actually discounts Comodo certificates, and the Comodo website is unclear. QUESTION: Will ONE code signing certificate be enough, or do I need one for Microsoft executables and a second for things like Word and Excel macros? My main goal is to sign things like VS2008 code snippets so that I can export them securely; however, I would like to be able to use the same code signing certificate for signing other items too. Thank you ~~ regards, Gerry (Lowry)
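
    On the Authenticode side at least, a single certificate does cover multiple Microsoft file types — the same PFX signs an EXE, a DLL or an OCX with Microsoft's signtool (file names and the timestamp URL below are illustrative):

        rem Sign several file types with ONE code signing certificate
        signtool sign /f mycert.pfx /p MyPfxPassword /t http://timestamp.comodoca.com/authenticode MyApp.exe
        signtool sign /f mycert.pfx /p MyPfxPassword /t http://timestamp.comodoca.com/authenticode MyLib.dll
        rem Verify the signature afterwards
        signtool verify /pa MyApp.exe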

    Read the article

  • Easy to use AutoHotkey/AutoIT alternatives for Linux

    - by Xeddy
    Hi all, I'm looking for recommendations for an easy-to-use GUI automation/macro platform for Linux. If you're familiar with AutoHotkey or AutoIt on Windows, then you know exactly the kind of features I need, and the level of complexity. If you aren't familiar, then here's a small code snippet of how easy it is to use AHK: InputBox, varInput, Please enter some random text... Run, notepad.exe WinWaitActive, Untitled - Notepad SendInput, %varInput% SendInput, !f{Up}{Enter}{Enter} WinWaitActive, Save SendInput, SomeRandomFile{Enter} MsgBox, Your text`, %varInput% has been saved using notepad! #n::Run, notepad.exe Now the above example, although a bit pointless, is a demo of the sort of functionality and simplicity I'm looking for. Here's an explanation for those who don't speak AHK: ----Start of Explanation of Code ---- Asks user to input some text and stores it in varInput Runs notepad.exe Waits till window exists and is active Sends the contents of varInput as a series of keystrokes Sends keystrokes to go to File - Exit Waits till the "Save" window is active Sends some more keystrokes Shows a Message Box with some text and the contents of a variable Registers a hotkey, Win+N, which when pressed executes notepad.exe ----End of Explanation---- So as you can understand, the features are quite obvious: the ability to easily simulate keyboard and mouse functions, read input, process and display output, execute programs, manipulate windows, register hotkeys, etc., all without requiring any #includes, unnecessary brackets, class declarations etc. In short: simple. Now, I've played around a bit with Perl and Python, but it's definitely no AHK. They're great for more advanced stuff, but surely there has to be some tool out there for easy GUI automation? PS: I've already tried running AHK with Wine, but sending keystrokes and hotkeys don't work.
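
    The closest low-ceremony equivalent on X11 is usually xdotool driven from a shell script — a rough sketch of the notepad demo above, with gedit standing in for notepad and the window title being an assumption:

        #!/bin/sh
        # Rough xdotool port of the AHK demo (gedit stands in for notepad)
        varInput=$(zenity --entry --text "Please enter some random text...")
        gedit &
        # WinWaitActive equivalent: poll until the window exists, then focus it
        until xdotool search --name "gedit" >/dev/null 2>&1; do sleep 0.2; done
        xdotool search --name "gedit" windowactivate --sync
        xdotool type -- "$varInput"
        # Hotkey equivalent: bind Win+N to a command with xbindkeys instead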

    Read the article

  • How to break WinDbg in an anonymous method?

    - by Richard Berg
    Title kinda says it all. The usual SOS command !bpmd doesn't do a lot of good without a name. Some ideas I had: dump every method, then use !bpmd -md when you find the corresponding MethodDesc not practical in real world usage, from what I can tell. Even if I wrote a macro to limit the dump to anonymous types/methods, there's no obvious way to tell them apart. use Reflector to dump the MSIL name doesn't help when dealing with dynamic assemblies and/or Reflection.Emit. Visual Studio's inability to read local vars inside such scenarios is the whole reason I turned to Windbg in the first place... set the breakpoint in VS, wait for it to hit, then change to Windbg using the noninvasive trick attempting to detach from VS causes it to hang (along with the app). I think this is due to the fact that the managed debugger is a "soft" debugger via thread injection instead of a standard "hard" debugger. Or maybe it's just a VS bug specific to Silverlight (would hardly be the first I've encountered). set a breakpoint on some other location known to call into the anonymous method, then single-step your way in my backup plan, though I'd rather not resort to it if this Q&A reveals a better way
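
    One more avenue, sketched below: anonymous methods do get stable (if ugly) compiler-generated names in metadata — closures live on <>c__DisplayClassN types and the methods are named <OuterMethod>b__N — so once you spot the mangled name you can feed it to !name2ee and then !bpmd -md. The module, type, and method names here are placeholders, and for Silverlight you must load the Silverlight-matching SOS first:

        $$ load the SOS that matches the runtime being debugged
        .loadby sos clr
        $$ resolve the compiler-generated method to a MethodDesc
        !name2ee MyApp.exe MyApp.Program+<>c__DisplayClass2.<Run>b__0
        $$ then set the breakpoint on the MethodDesc address it printed
        !bpmd -md <MethodDesc-address>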

    Read the article

  • What do you mean by the expressiveness in a programming language?

    - by prosseek
    I see the word 'expressiveness' a lot when people want to stress that one language is better than another, but I don't see exactly what they mean by it. Is it verboseness/succinctness? I mean, if one language can write something down shorter than another, does that mean expressiveness? Please refer to my other question - http://stackoverflow.com/questions/2411772/article-about-code-density-as-a-measure-of-programming-language-power Is it the power of the language? Paul Graham says that one language is more powerful than another in the sense that one language can do things the other can't (for example, LISP can do things with its macros that other languages can't). Is it just something that makes life easier? Regular expressions could be one example. Is it a different way of solving the same problem, something like SQL for search problems? What do you think about the expressiveness of a programming language? Can you show expressiveness using some code? What's the relationship between expressiveness and DSLs? Do people come up with DSLs to get expressiveness?
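
    One concrete way to frame the succinctness-vs-expressiveness distinction (a toy example of mine, not from the question): the two fragments below compute the same value, and the second is usually called more expressive not merely because it is shorter, but because it states what is computed rather than how:

        # Imperative: the reader reconstructs the intent from the mechanics
        total = 0
        for x in range(100):
            if x % 2 == 0:
                total += x * x

        # Declarative: the intent is the code
        total = sum(x * x for x in range(100) if x % 2 == 0)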

    Read the article

  • What is the worst programming language you ever worked with? [closed]

    - by Ludwig Weinzierl
    If you have an interesting story to share, please post an answer, but do not abuse this question for bashing a language. We are programmers, and our primary tool is the programming language we use. While there is a lot of discussion about the best one, I'd like to hear your stories about the worst programming languages you ever worked with, and I'd like to know exactly what annoyed you. I'd like to collect these stories partly to avoid common pitfalls while designing a language (especially a DSL) and partly to avoid quirky languages in the future in general. This question is not subjective. If a language supports only single-character identifiers (see my own answer) this is bad in a non-debatable way. EDIT Some people have raised concerns that this question attracts trolls. Wading through all your answers made one thing clear. The large majority of answers are appropriate, useful and well written. UPDATE 2009-07-01 19:15 GMT The language overview is now complete, covering 103 different languages from 102 answers. I decided to be lax about what counts as a programming language and included anything reasonable. Thank you David for your comments on this. Here are all programming languages covered so far (alphabetical order, linked with answer, new entries in bold): ABAP, all 20th century languages, all drag and drop languages, all proprietary languages, APF, APL (1), AS400, Authorware, Autohotkey, BancaStar, BASIC, Bourne Shell, Brainfuck, C++, Centura Team Developer, Cobol (1), Cold Fusion, Coldfusion, CRM114, Crystal Syntax, CSS, Dataflex 2.3, DB/c DX, dbase II, DCL, Delphi IDE, Doors DXL, DOS batch (1), Excel Macro language, FileMaker, FOCUS, Forth, FORTRAN, FORTRAN 77, HTML, Illustra web blade, Informix 4th Generation Language, Informix Universal Server web blade, INTERCAL, Java, JavaScript (1), JCL (1), karol, LabTalk, Labview, Lingo, LISP, Logo, LOLCODE, LotusScript, m4, Magic II, Makefiles, MapBasic, MaxScript, Meditech Magic, MEL, mIRC Script, MS Access, MUMPS, Oberon, object extensions to C, Objective-C, OPS5, Oz, Perl (1), PHP, PL/SQL, PowerDynamo, PROGRESS 4GL, prova, PS-FOCUS, Python, Regular Expressions, RPG, RPG II, Scheme, ScriptMaker, sendmail.conf, Smalltalk, SNOBOL, SpeedScript, Sybase PowerBuilder, Symbian C++, System RPL, TCL, TECO, The Visual Software Environment, Tiny praat, TransCAD, troff, uBasic, VB6 (1), VBScript (1), VDF4, Vimscript, Visual Basic (1), Visual C++, Visual Foxpro, VSE, Webspeed, XSLT The answers covering 80386 assembler, VB6 and VBScript have been removed.

    Read the article

  • Copy Selective Data from Database to Invoice, Based on Certain Criteria

    - by Scott
    For starters, here is an example of the Microsoft Excel database I am working with: Month/Address/Name/Description/Amount January/123 Street/Fred/Painting/100 January/456 Avenue/Scott/Flooring/400 January/789 Road/Scott/Plumbing/100 February/123 Street/Fred/Flooring/600 February/246 Lane/Fred/Electrical/300 March/789 Road/Scott/Drywall/150 What I want to be able to do is selectively copy info from this database to invoices (also Excel). The invoice has three columns: Address/Description/Amount. I want to be able to automatically fill the invoices in as the database is filled in (either automatically, or if I have to manually run the macro to do it, that might be fine). Each name (Scott, Fred, etc.) will have their own set of 12 invoices for the year. So, e.g., I want to be able to produce a January invoice for all work done for Scott in January, showing the address, the description and the amount, line by line. So every time work on Scott's address(es) is done, the database is filled in, and I want it to "send" that information to the invoice on the next available line, filling in only the Address/Description/Amount columns from the database. Fred's invoice should fill in as any work is done on Fred's addresses. And once the month changes, the next invoice should start filling in. So first I need to filter the data by the month and the name (and there is actually one more column to filter by, but let's keep this example simpler). Then I need to list the remaining data on the invoice, but only certain cells from the rows that are now left. Help, anyone?
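
    A sketch of the filtering step in VBA — the sheet names, the one-sheet-per-person-per-month layout, and the column order are all assumptions based on the description above:

        Sub BuildInvoice(ByVal who As String, ByVal mon As String)
            ' Copies Address/Description/Amount for rows matching name + month.
            ' Assumes a "Database" sheet and invoice sheets named like "Scott-January".
            Dim src As Worksheet, dst As Worksheet
            Dim r As Long, outRow As Long
            Set src = Worksheets("Database")
            Set dst = Worksheets(who & "-" & mon)
            outRow = 2
            For r = 2 To src.Cells(src.Rows.Count, 1).End(xlUp).Row
                If src.Cells(r, 1).Value = mon And src.Cells(r, 3).Value = who Then
                    dst.Cells(outRow, 1).Value = src.Cells(r, 2).Value ' Address
                    dst.Cells(outRow, 2).Value = src.Cells(r, 4).Value ' Description
                    dst.Cells(outRow, 3).Value = src.Cells(r, 5).Value ' Amount
                    outRow = outRow + 1
                End If
            Next r
        End Sub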

    Read the article

  • FreeRTOS Sleep Mode hazards while using MSP430f5438

    - by michael
    Hi, I wrote an idle hook, shown here: void vApplicationIdleHook( void ) { asm("nop"); P1OUT &= ~0x01;//go to sleep lights off! LPM3;// LPM Mode - remove to make debug a little easier... asm("nop"); } That should cause the LED to turn off and the MSP430 to go to sleep when there is nothing to do. I turn the LED on during some tasks. I also made sure to modify the sleep-mode bits in the SR upon exit of any interrupt that could possibly wake the MCU (with the exception of the scheduler tick ISR in portext.s43). The macro in IAR is __bic_SR_register_on_exit(LPM3_bits); // Exit Interrupt as active CPU However, it seems as though putting the MCU to sleep causes some irregular behavior. The LED stays on always, although when I scope it, it will turn off for a couple of instruction cycles whenever I wake the MCU via one of the interrupts (UART), and then turn back on. If I comment out the LPM3 instruction, things go as planned. The LED stays off for most of the time and only comes on when a task is running. I am using an MSP430F5438. Any ideas?
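
    Two things worth checking, sketched below in IAR syntax: enter low-power mode and enable interrupts in a single instruction (so no interrupt can slip in between "decide to sleep" and "actually sleep"), and make sure every ISR that can ready a task wakes the CPU on exit. The vector name and queue call are placeholders for whatever the UART handler really does:

        /* Idle hook: set LPM bits and GIE atomically in one instruction.
           NB: in LPM3 only ACLK keeps running, so the FreeRTOS tick source
           must be ACLK-driven or the scheduler stops while asleep. */
        void vApplicationIdleHook( void )
        {
            P1OUT &= ~0x01;                        /* LED off */
            __bis_SR_register( LPM3_bits + GIE );  /* atomic sleep + enable IRQs */
            __no_operation();
        }

        /* Every ISR that can make a task ready must clear the LPM bits on exit,
           including (for this symptom) the UART receive ISR: */
        #pragma vector = USCI_A0_VECTOR
        __interrupt void uart_rx_isr( void )
        {
            /* ... read the character, e.g. xQueueSendFromISR() ... */
            __bic_SR_register_on_exit( LPM3_bits );  /* resume as active CPU */
        }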

    Read the article

  • Problem compiling gnustep-gui-0.16.0 undefined reference to png_sizeof

    - by stefanB
    I'm trying to compile GNUstep on a Linux box, but the gnustep-gui-0.16.0 package is failing. I downloaded GNUstep Startup stable 0.20.1 (http://wwwmain.gnustep.org/resources/downloads.php) and followed the instructions on how to compile (./configure && make). I'm getting this error: libgnustep-gui.so: undefined reference to 'png_sizeof' I have compiled the latest libpng (1.2.34) and I can see that png_sizeof is defined as a macro. However, I'm not quite sure how to fix the gnustep-gui-0.16.0 build. I tried to pass the include/lib directory where libpng is installed to the configure build, but nothing seems to help. I have a quite up-to-date Linux box but am using gcc 3.3 (upgrade is not an option - but this should not be a problem). Full error: Making all for tool set_show_service... Compiling file set_show_service.m ... Linking tool set_show_service ... ../Source/./obj/libgnustep-gui.so: undefined reference to `png_sizeof' collect2: ld returned 1 exit status gmake[3]: *** [obj/set_show_service] Error 1 gmake[2]: *** [set_show_service.all.tool.variables] Error 2 gmake[1]: *** [internal-all] Error 2 gmake[1]: Leaving directory `/home/bla/local/src/gnustep-startup-0.22.0/build/gnustep-gui-0.16.0' gmake[3]: *** [obj/set_show_service] Error 1 gmake[2]: *** [set_show_service.all.tool.variables] Error 2 gmake[1]: *** [internal-all] Error 2 Any suggestions? Thanks
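
    Since png_sizeof is a macro in the libpng you built, the undefined symbol suggests the compile step saw one libpng's headers while the link step found a different libpng library. A sketch of pinning both steps to the same locally built copy (the install prefix is illustrative):

        # Point preprocessor, linker and runtime at the same libpng build
        export CPPFLAGS="-I$HOME/local/include"
        export LDFLAGS="-L$HOME/local/lib -Wl,-rpath,$HOME/local/lib"
        ./configure
        make

        # Double-check which libpng actually got linked in (path may differ)
        ldd Source/obj/libgnustep-gui.so | grep png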

    Read the article

  • Create Jinja2 macros that put content in separate places

    - by Brian M. Hunt
    I want to create a table of contents and endnotes in a Jinja2 template. How can one accomplish these tasks? For example, I want to have a template as follows: {% block toc %} {# ... the ToC goes here ... #} {% endblock %} {% include "some other file with content.jnj" %} {% block endnotes %} {# ... the endnotes go here ... #} {% endblock %} Where the some other file with content.jnj has content like this: {% section "One" %} Title information for Section One (may be quite long); goes in Table of Contents ... Content of section One {% section "Two" %} Title information of Section Two (also may be quite long) <a href="#" id="en1">EndNote 1</a> <script type="text/javascript">...(may be reasonably long) </script> {# ... Everything up to here is included in the EndNote #} Where I say "may be quite/reasonably long" I mean that it can't reasonably be put into quotes as an argument to a macro or global function. I'm wondering if there's a pattern for this that can accommodate it, within the framework of Jinja2. My initial thought is to create an extension, so that one can have a block for sections and endnotes, like so: {% section "One" %} Title information goes here. {% endsection %} {% endnote "one" %} <a href="#">...</a> <script> ... </script> {% endendnote %} Then have global functions (that pass in the Jinja2 Environment): {{ table_of_contents() }} {% include ... %} {{ endnotes() }} However, while this will work for endnotes, I'd presume it requires a second pass by something for the table of contents. Thank you for reading. I'd be much obliged for your thoughts and input. Brian
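
    A sketch of that "second pass" idea, using a plain global function instead of a custom {% section %} tag — the template name and the callback shape are illustrative assumptions:

        from jinja2 import Environment, FileSystemLoader

        env = Environment(loader=FileSystemLoader("templates"))
        sections = []                      # section titles collected during pass 1

        def section(title):
            sections.append(title)         # remember the title for the ToC
            return ""                      # emit nothing at the call site

        env.globals["section"] = section   # templates call {{ section("One") }}
        tmpl = env.get_template("page.jnj")

        tmpl.render(toc=[])                # pass 1: output discarded, titles collected
        toc = list(sections)
        del sections[:]                    # reset so pass 2 doesn't double-collect
        html = tmpl.render(toc=toc)        # pass 2: real output with a full ToC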

    Read the article

  • import txt files using excel interop in C# (QueryTables.Add)

    - by kite
    Hi all, I am trying to insert text files into an Excel cell using QueryTables.Add; there is no error, but the worksheet stays empty, except for the single cell manipulated using the Value2 property. I already used macro recording to find the objects to use. Can you help me with this? (I am using VS2008, C#, and Excel 2003 and 2007; both show an empty cell.) Below is my code; thanks for your help: Application application = new ApplicationClass(); try { object misValue = Missing.Value; wbDoc = application.Workbooks.Open(flnmDoc, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue, misValue); wsRefDocBudgetOwner = (Worksheet)wbDoc.Worksheets[2]; Range lRange = wsRefDocBudgetOwner.get_Range("B2", "B25"); var temp2 = wsRefDocBudgetOwner.QueryTables; var temp = temp2.Add(@"TEXT;d:\temp\config ssas.txt", lRange, Type.Missing); //temp.RefreshStyle = XlCellInsertionMode.xlInsertDeleteCells; //temp.RefreshOnFileOpen = true; wsRefDocBudgetOwner.get_Range("B1", "B1").Value2 = "Lgfdgast adsffdafadfads"; wbDoc.Save(); //wbDoc.SaveAs(flnmDoc2, misValue, misValue, misValue, misValue, misValue, XlSaveAsAccessMode.xlExclusive, // misValue, misValue, misValue, misValue, misValue); wbDoc.Close(Missing.Value, Missing.Value, Missing.Value); } finally { application.Quit(); }
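
    One likely culprit: QueryTables.Add only registers the query — nothing is imported until Refresh() is called on the returned QueryTable. A sketch of the missing step (the parsing properties are illustrative for a delimited text file):

        // After: var temp = temp2.Add(@"TEXT;d:\temp\config ssas.txt", lRange, Type.Missing);
        temp.TextFileParseType = Microsoft.Office.Interop.Excel.XlTextParsingType.xlDelimited;
        temp.TextFileTabDelimiter = true;      // illustrative: tab-separated input
        temp.RefreshStyle = Microsoft.Office.Interop.Excel.XlCellInsertionMode.xlOverwriteCells;
        temp.Refresh(false);                   // false = synchronous; this line does the import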

    Read the article

  • Linux Kernel - Red/Black Trees

    - by CodeRanger
    I'm trying to implement a red/black tree in Linux per task_struct, using code from linux/rbtree.h. I can get a red/black tree inserting properly in a standalone space in the kernel, such as a module, but when I try to get the same code to function with the rb_root declared in either task_struct or task_struct->files_struct, I get a SEGFAULT every time I try an insert. Here's some code: In task_struct I create an rb_root struct for my tree (not a pointer). In init_task.h, in the INIT_TASK(tsk) macro, I set this equal to RB_ROOT. To do an insert, I use this code: rb_insert(&(current->fd_tree), &rbnode); This is where the issue occurs. My insert command is the standard insert that is documented in all the kernel's rbtree documentation: int my_insert(struct rb_root *root, struct mytype *data) { struct rb_node **new = &(root->rb_node), *parent = NULL; /* Figure out where to put new node */ while (*new) { struct mytype *this = container_of(*new, struct mytype, node); int result = strcmp(data->keystring, this->keystring); parent = *new; if (result < 0) new = &((*new)->rb_left); else if (result > 0) new = &((*new)->rb_right); else return FALSE; } /* Add new node and rebalance tree. */ rb_link_node(&data->node, parent, new); rb_insert_color(&data->node, root); return TRUE; } Is there something I'm missing? Is there some reason this would work fine if I made a tree root outside of task_struct? If I make rb_root inside of a module, this insert works fine. But once I put the actual tree root in task_struct or even in task_struct->files_struct, I get a SEGFAULT. Can a root node not be added in these structs? Any tips are greatly appreciated. I've tried nearly everything I can think of.
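
    One thing the snippet doesn't show and is worth checking: the rb_node handed to the insert must live in memory that outlives the call — inserting a node embedded in a stack local leaves dangling pointers in the tree and crashes on the next insert or traversal. A hypothetical allocation sketch (struct layout follows the question's mytype; 'key' is a placeholder for whatever string you index on):

        /* sketch: each tree entry is heap-allocated, never a stack local */
        struct mytype *item = kmalloc(sizeof(*item), GFP_KERNEL);
        if (!item)
                return -ENOMEM;
        strlcpy(item->keystring, key, sizeof(item->keystring));

        /* also worth verifying: INIT_TASK() only covers init_task, so confirm
           the fork path really copies or re-initializes fd_tree for children */
        if (!my_insert(&current->fd_tree, item))
                kfree(item);    /* duplicate key: my_insert returned FALSE */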

    Read the article

  • Using Sendkeys in python to press {F12} results in other keys pressed?

    - by ThantiK
    import time from ctypes import * import win32gui import win32com.client as comclt X = 119 Y = 53 def PILColorToRGB(pil_color): """ convert a PIL-compatible integer into an (r, g, b) tuple """ hexstr = '%06x' % pil_color # reverse byte order r, g, b = hexstr[4:], hexstr[2:4], hexstr[:2] r, g, b = [int(n, 16) for n in (r, g, b)] return (r, g, b) wsh = comclt.Dispatch("WScript.Shell") w = win32gui user = windll.LoadLibrary("c:\\windows\\system32\\user32.dll") h = user.GetDC(0) gdi = windll.LoadLibrary("c:\\windows\\system32\\gdi32.dll") while True: FG = w.GetWindowText(w.GetForegroundWindow()) #FG = Foreground window title. if FG == "World of Warcraft": rgb = (PILColorToRGB(gdi.GetPixel(h,X,Y))) #X, Y time.sleep(0.333) #don't check too often. if (rgb[0] >= 130): #While Pixel (X, Y) is Red... #print "%d %d %d" % (rgb[0], rgb[1], rgb[2]) #Debug wsh.SendKeys("{F12}") #Send a key. time.sleep(0.7) #Add some extra down-time if we send the key. else: time.sleep(5) Basically all this code does is read a pixel on the screen and send a key (F12) if the pixel is red. But when using this code I regularly get some phantom key-code being pressed. The application I'm using this on is obviously World of Warcraft, and I have checked that all keybinds are standard keybinds. However, randomly it seems I get either an up arrow or a w pressed, which moves my character forward whenever this code executes (F12 is bound to a macro, unbound from any movement; if I press F12 with a hardware event, it does not exhibit this behavior). What in the world could be going on here?
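
    WScript.Shell's SendKeys posts synthetic window messages, which games and low-level hooks are known to mangle; a common workaround — sketched below with ctypes, so no new dependencies — is to synthesize a hardware-style keyboard event instead:

        import ctypes

        VK_F12 = 0x7B
        KEYEVENTF_KEYUP = 0x0002
        user32 = ctypes.windll.user32

        def press_f12():
            """Inject an F12 press via the Win32 keybd_event API."""
            user32.keybd_event(VK_F12, 0, 0, 0)                # key down
            user32.keybd_event(VK_F12, 0, KEYEVENTF_KEYUP, 0)  # key up

        press_f12()  # use in place of wsh.SendKeys("{F12}")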

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >