Search Results

Search found 3325 results on 133 pages for 'export'.


  • Too many open files in one of my Java routines.

    - by Irfan Zulfiqar
    I have multithreaded code that has to generate a set of objects and write them to a file. When I run it I sometimes get a "Too many open files" exception. I have checked the code to make sure that all the file streams are being closed properly. Here is the stack trace. When I do ulimit -a, the open files limit is set to 1024. We think increasing this number is not a viable option/solution.

        [java] java.io.FileNotFoundException: /export/event_1_0.dtd (Too many open files)
        [java] at java.io.FileInputStream.open(Native Method)
        [java] at java.io.FileInputStream.<init>(FileInputStream.java:106)
        [java] at java.io.FileInputStream.<init>(FileInputStream.java:66)
        [java] at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
        [java] at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
        [java] at java.net.URL.openStream(URL.java:1010)

    What we have identified so far, by looking closely at the list of open files, is that the VM is opening the same class file multiple times:

        /export/BaseEvent.class            236
        /export/EventType1BaseEvent.class   60
        /export/EventType2BaseEvent.class   48
        /export/EventType2.class            30
        /export/EventType1.class            14

    BaseEvent is the parent of all the classes, and EventType1 and EventType2 inherit from EventType1BaseEvent and EventType2BaseEvent respectively. Why would a class loader load the same class file 200+ times? It seems it is opening the base class as many times as it creates any child instance. Is this normal? Can it be handled any other way apart from increasing the number of open files?
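    One way to narrow this down from the shell is to watch which files the JVM actually holds open while the job runs; a minimal sketch (the PID is a placeholder, and the /proc check assumes Linux):

        # count open descriptors for the JVM process
        lsof -p <jvm-pid> | wc -l

        # group open files by path to see which ones are held open repeatedly
        lsof -p <jvm-pid> | awk '{print $NF}' | sort | uniq -c | sort -rn | head

        # confirm the limit the process is really running under
        grep "open files" /proc/<jvm-pid>/limits

    If the duplicates turn out to be the DTD opened via URL.openStream rather than the class files themselves, caching that resource once and reusing it usually helps more than raising ulimit.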

    Read the article

  • CURL request incomplete, suspect timeout but not sure.

    - by girlygeek
    I am currently using cURL via a PHP script, running as a daily cron job, to export product data in CSV format from a site's admin area. The normal way of exporting the data would be to go to the Export page in a browser, set the configuration, then click on the "export data" button. But as the number of products I am exporting is very large, and it takes more than 5-10 minutes to export the data, I've decided to use PHP's curl functions to mimic this on a daily basis via cron. Previously it was working fine, but recently, after I increased the number of products in the store by 500+, the script fails to return the exported data. Testing it manually by clicking on the "export" button in a browser does return the data correctly, so there is no timeout issue when running the export in a browser manually. I've tested it, and by removing/decreasing the number of products (and thus the time needed), the PHP cURL script works fine again when run from cron. So I suspect it has something to do with timeouts, specifically with the curl functions in PHP. I've set both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT to 0 to try. In the PHP cURL script I've also set set_time_limit(3000). But still it does not work: the request times out and the script fails to return a complete set of CSV data. Any help in resolving/understanding this issue will be much appreciated!
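    For comparison, it can help to take PHP out of the picture and let cron call the curl binary directly with explicit timeout settings; a rough sketch (URL, session cookie and output path are placeholders, not taken from the question):

        #!/bin/bash
        # fetch the export; --connect-timeout only bounds the TCP/TLS handshake,
        # and since no --max-time is given curl imposes no overall time limit
        curl --fail --connect-timeout 60 \
             --cookie "PHPSESSID=<admin-session-id>" \
             -o "/backups/products-$(date +%F).csv" \
             "https://example.com/admin/export.php?format=csv"

    If this also stalls at roughly the same point, the limit is probably being imposed on the server side (web server, proxy, or the admin page's own execution limit) rather than by the cURL options in the cron script.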

    Read the article

  • How to create an NFS proxy by using kernel server & client?

    - by Martin C. Martin
    I have a file server that exports as NFS. On an Ubuntu machine I mount that, then try to export it as an NFS volume. When I go to export it, I get the message:

        exportfs: /test/nfs-mount-point does not support NFS export

    How can I get this to work, or at least get more information as to what the problem is? Exact steps (Ubuntu 12.04):

        mount -f nfs myfileserver.com:/server-dir /test/nfs-mount-point

    [Works fine, I can read & write files]. /etc/exports contains:

        /test/nfs-mount-point *(rw,no_subtree_check)

    Then:

        sudo /etc/init.d/nfs-kernel-server restart
        Stopping NFS kernel daemon                        [ OK ]
        Unexporting directories for NFS kernel daemon...  [ OK ]
        Exporting directories for NFS kernel daemon...
        exportfs: /test/nfs-mount-point does not support NFS export
                                                          [ OK ]
        Starting NFS kernel daemon                        [ OK ]
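    Re-exporting a directory that is itself an NFS mount is only partially supported by the kernel server, and where it works at all the export line usually needs an explicit fsid, since there is no local block device to derive one from; a hedged sketch of /etc/exports (the fsid value is arbitrary):

        # /etc/exports -- give the re-exported NFS mount a fixed filesystem id
        /test/nfs-mount-point *(rw,no_subtree_check,fsid=1)

        # re-read the exports table without a full restart
        sudo exportfs -ra

    If exportfs still refuses, a user-space NFS server such as unfs3, or simply having the clients mount the back-end server directly, may be the more practical route.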

    Read the article

  • NFS Mount Issues

    - by user554005
    Having some issues with an NFS setup: on the clients it just times out / refuses to connect.

        [root@host9 ~]# mount 192.168.0.17:/home/export /mnt/export
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).

    Here are the settings I'm using:

        [root@host17 /home/export]# cat /etc/hosts.allow
        #
        # hosts.allow   This file contains access rules which are used to
        #               allow or deny connections to network services that
        #               either use the tcp_wrappers library or that have been
        #               started through a tcp_wrappers-enabled xinetd.
        #
        #               See 'man 5 hosts_options' and 'man 5 hosts_access'
        #               for information on rule syntax.
        #               See 'man tcpd' for information on tcp_wrappers
        #
        portmap: 192.168.0.0/255.255.255.0
        lockd: 192.168.0.0/255.255.255.0
        rquotad: 192.168.0.0/255.255.255.0
        mountd: 192.168.0.0/255.255.255.0
        statd: 192.168.0.0/255.255.255.0

        [root@host17 /home/export]# cat /etc/hosts.deny
        #
        # hosts.deny    This file contains access rules which are used to
        #               deny connections to network services that either use
        #               the tcp_wrappers library or that have been
        #               started through a tcp_wrappers-enabled xinetd.
        #
        #               The rules in this file can also be set up in
        #               /etc/hosts.allow with a 'deny' option instead.
        #
        #               See 'man 5 hosts_options' and 'man 5 hosts_access'
        #               for information on rule syntax.
        #               See 'man tcpd' for information on tcp_wrappers
        #
        portmap:ALL
        lockd:ALL
        mountd:ALL
        rquotad:ALL
        statd:ALL

        [root@host17 /home/export]# cat /etc/exports
        /home/export 192.168.0.0/255.255.255.0(rw)

        [root@host17 /home/export]# iptables -L
        Chain INPUT (policy ACCEPT)
        target               prot opt source            destination
        RH-Firewall-1-INPUT  all  --  anywhere          anywhere

        Chain FORWARD (policy ACCEPT)
        target               prot opt source            destination
        RH-Firewall-1-INPUT  all  --  anywhere          anywhere

        Chain OUTPUT (policy ACCEPT)
        target               prot opt source            destination

        Chain RH-Firewall-1-INPUT (2 references)
        target  prot opt source           destination
        ACCEPT  all  --  anywhere         anywhere
        ACCEPT  icmp --  anywhere         anywhere      icmp any
        ACCEPT  esp  --  anywhere         anywhere
        ACCEPT  ah   --  anywhere         anywhere
        ACCEPT  udp  --  anywhere         224.0.0.251   udp dpt:mdns
        ACCEPT  udp  --  anywhere         anywhere      udp dpt:ipp
        ACCEPT  tcp  --  anywhere         anywhere      tcp dpt:ipp
        ACCEPT  all  --  anywhere         anywhere      state RELATED,ESTABLISHED
        ACCEPT  tcp  --  anywhere         anywhere      state NEW tcp dpt:ssh
        ACCEPT  tcp  --  anywhere         anywhere      state NEW tcp dpt:http
        ACCEPT  tcp  --  anywhere         anywhere      state NEW tcp dpt:https
        ACCEPT  tcp  --  anywhere         anywhere      state NEW tcp dpt:6379
        ACCEPT  udp  --  192.168.0.0/24   anywhere      state NEW udp dpt:sunrpc
        ACCEPT  tcp  --  192.168.0.0/24   anywhere      state NEW tcp dpt:sunrpc
        ACCEPT  tcp  --  192.168.0.0/24   anywhere      state NEW tcp dpt:nfs
        ACCEPT  tcp  --  192.168.0.0/24   anywhere      state NEW tcp dpt:32803
        ACCEPT  udp  --  192.168.0.0/24   anywhere      state NEW udp dpt:filenet-rpc
        ACCEPT  tcp  --  192.168.0.0/24   anywhere      state NEW tcp dpt:892
        ACCEPT  udp  --  192.168.0.0/24   anywhere      state NEW udp dpt:892
        ACCEPT  tcp  --  192.168.0.0/24   anywhere      state NEW tcp dpt:rquotad
        ACCEPT  udp  --  192.168.0.0/24   anywhere      state NEW udp dpt:rquotad
        ACCEPT  tcp  --  192.168.0.0/24   anywhere      state NEW tcp dpt:pftp
        ACCEPT  udp  --  192.168.0.0/24   anywhere      state NEW udp dpt:pftp
        REJECT  all  --  anywhere         anywhere      reject-with icmp-host-prohibited

    On the clients, here is some rpcinfo output:

        [root@host9 ~]# rpcinfo -p 192.168.0.17
           program vers proto   port
            100000    4   tcp    111  portmapper
            100000    3   tcp    111  portmapper
            100000    2   tcp    111  portmapper
            100000    4   udp    111  portmapper
            100000    3   udp    111  portmapper
            100000    2   udp    111  portmapper
            100011    1   udp    875  rquotad
            100011    2   udp    875  rquotad
            100011    1   tcp    875  rquotad
            100011    2   tcp    875  rquotad
            100005    1   udp  45857  mountd
            100005    1   tcp  55772  mountd
            100005    2   udp  34021  mountd
            100005    2   tcp  59542  mountd
            100005    3   udp  60930  mountd
            100005    3   tcp  53086  mountd
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100227    2   udp   2049  nfs_acl
            100227    3   udp   2049  nfs_acl
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100227    2   tcp   2049  nfs_acl
            100227    3   tcp   2049  nfs_acl
            100021    1   udp  59832  nlockmgr
            100021    3   udp  59832  nlockmgr
            100021    4   udp  59832  nlockmgr
            100021    1   tcp  36140  nlockmgr
            100021    3   tcp  36140  nlockmgr
            100021    4   tcp  36140  nlockmgr
            100024    1   udp  46494  status
            100024    1   tcp  49672  status

        [root@host9 ~]# rpcinfo -u 192.168.0.17 nfs
        rpcinfo: RPC: Timed out
        program 100003 version 0 is not available
        [root@host9 ~]# rpcinfo -u 192.168.0.17 portmap
        program 100000 version 2 ready and waiting
        program 100000 version 3 ready and waiting
        program 100000 version 4 ready and waiting
        [root@host9 ~]# rpcinfo -u 192.168.0.17 mount
        rpcinfo: RPC: Timed out
        program 100005 version 0 is not available

    I'm running CentOS 5.8 on all systems.
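    One thing that stands out is that mountd, nlockmgr and status are sitting on random ports (45857, 55772, 59832, ...) while the firewall only opens fixed ones (sunrpc, 2049, 32803, 892, 875, 662). On CentOS 5 the usual fix is to pin those daemons to fixed ports and open exactly those; a hedged sketch using the conventional defaults:

        # /etc/sysconfig/nfs -- pin the NFS helper daemons to fixed ports
        MOUNTD_PORT=892
        STATD_PORT=662
        LOCKD_TCPPORT=32803
        LOCKD_UDPPORT=32769
        RQUOTAD_PORT=875

        # restart and confirm the registered ports now match the firewall
        service nfs restart
        service nfslock restart
        rpcinfo -p localhost | egrep 'mountd|nlockmgr|status'

    The iptables rules above already accept 892, 875, 32803, 32769 and 662 from 192.168.0.0/24 (some show up under their /etc/services names such as pftp and filenet-rpc), so once the daemons actually listen on those ports the client mount should stop timing out.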

    Read the article

  • NFSv3 + ACL: mask is gone on clients

    - by Jorge Suárez de Lis
    I'm sharing an NFS folder among a user group. The default umask on the clients is 0700, and this is a problem because newly created files won't be readable/writable by other users. So I'm using ACLs to force the equivalent of umask 0770 on the shared folder, and this works fine on the server, but not on the clients.

        server # getfacl /export/proyectos
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::r-x

        server # getfacl /export/proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::---

    As you see, the default mask ACLs (and, on the second directory, a specific mask as well) are being applied. I mount the whole share on the client:

        172.16.54.56:/export/proyectos on /proyectos type nfs (rw,noatime,rsize=131072,wsize=131072,acregmin=10,acl,nfsvers=3,addr=172.16.54.56)

    But the mask and default:mask ACLs are gone.

        client $ getfacl /proyectos/
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:other::r-x

        client $ getfacl /proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:other::---

    It lacks the default:mask and mask ACLs, the only ones that I've set. So the proposed solution to enforce the umask won't work for me. Why is this happening?
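    Before treating the missing mask as fatal, it is worth checking whether the server-side ACL is still enforced on files the clients create, since the NFSv3 ACL side-protocol does not necessarily round-trip every entry even when it is applied correctly; a quick check along these lines (the file name is arbitrary):

        # on a client: create a file inside the shared project folder
        touch /proyectos/innovacion/acl-test.txt

        # on the server: see which permissions and ACL entries it actually got
        ls -l /export/proyectos/innovacion/acl-test.txt
        getfacl /export/proyectos/innovacion/acl-test.txt

    If the file comes out group-writable as intended, only the client-side display of the mask is lost; if not, the inheritance itself is being bypassed and the default ACL needs to be revisited.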

    Read the article

  • Mac OS X & Linux: mount_nfs: can't access /nfs: Permission denied

    - by MountainX
    I have an Ubuntu 12.04 NFS server and an iMac NFS client running OS X 10.6.8. I believe I have everything set up properly, yet I still get this error on the Mac:

        mount_nfs: can't access /nfs: Permission denied

    My exports file on the Linux server uses the insecure option, like this:

        /export/home/me/ 192.168.100.132(rw,subtree_check,insecure,nohide)

    where 192.168.100.132 is the address of my Mac. I have even tried using -o resvport on the Mac (in addition to insecure on Linux) and I still get the same error as above.

        $ sudo mount -t nfs -o resvport 192.168.100.1:/home/me /Users/me/mount

    Here is the output of showmount:

        # showmount -e 192.168.100.1
        Export list for 192.168.100.1:
        /export/home/me 192.168.100.132
        ....

    I have reviewed this similar question: How to mount NFS export on Mac OS X? And I have reviewed this frequently recommended tutorial: http://www.cyberciti.biz/faq/apple-mac-osx-nfs-mount-command-tutorial/ I still can't find a solution. Any ideas?
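    One detail worth double-checking: showmount reports the export as /export/home/me, while the mount command asks the server for /home/me. Unless the server remaps paths, the client normally has to name the exported path exactly; a sketch of that variant (addresses and paths copied from the question):

        sudo mkdir -p /Users/me/mount
        sudo mount -t nfs -o resvport 192.168.100.1:/export/home/me /Users/me/mount

    If that still fails, the mountd messages in the Ubuntu server's syslog usually say whether the request was refused because of the path, the client address, or the source port.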

    Read the article

  • Set up a proxy for a VPN server on Ubuntu Server 12.04

    - by Morteza Soltanabadiyan
    I have a VPN server with HTTPS, L2TP, OpenVPN, and PPTP. I want to set up a proxy on the server so that all connections coming from VPN clients will use it. I created the following bash script for it, but the proxy isn't working.

        gsettings set org.gnome.system.proxy mode 'manual'
        gsettings set org.gnome.system.proxy.http enabled true
        gsettings set org.gnome.system.proxy.http host 'cproxy.anadolu.edu.tr'
        gsettings set org.gnome.system.proxy.http port 8080
        gsettings set org.gnome.system.proxy.http authentication-user 'admin'
        gsettings set org.gnome.system.proxy.http authentication-password 'admin'
        gsettings set org.gnome.system.proxy use-same-proxy true
        export http_proxy=http://admin:admin@cproxy.anadolu.edu.tr:8080
        export https_proxy=http://admin:admin@cproxy.anadolu.edu.tr:8080
        export HTTP_PROXY=http://admin:admin@cproxy.anadolu.edu.tr:8080
        export HTTPS_PROXY=http://admin:admin@cproxy.anadolu.edu.tr:8080

    What should I do to make a global proxy for the server that all VPN clients will use automatically?
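    gsettings only changes the proxy for a GNOME desktop session, and exported variables only affect processes started from that one shell, so neither reaches traffic that VPN clients route through the box. Forcing clients through a proxy is normally done by redirecting their traffic on the server itself; a rough sketch assuming a Squid instance listening on port 3128 in intercept mode and VPN clients in 10.8.0.0/24 (both assumptions, not taken from the question):

        # enable forwarding so non-redirected traffic still flows
        sysctl -w net.ipv4.ip_forward=1

        # send web traffic arriving from the VPN subnet to the local proxy
        iptables -t nat -A PREROUTING -s 10.8.0.0/24 -p tcp --dport 80 \
                 -j REDIRECT --to-ports 3128

    Note that HTTPS cannot be silently intercepted this way; for that, clients need the proxy pushed to them explicitly, or the upstream proxy has to be reached over the tunnel by other means.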

    Read the article

  • Editing .bash_profile file not taking effect

    - by Sandeepan Nath
    I need to add export PATH=$PATH:/opt/lampp/bin to my ~/.bash_profile file so that mysql works from the command line on my system. Please check "mysql command line not working" for further details on that. I am working on a Fedora system and logged in as the root user. If I run locate .bash_profile I get these:

        /etc/skel/.bash_profile
        /home/sam/.bash_profile
        /home/sohil/.bash_profile
        /home/windows/.bash_profile
        /root/.bash_profile

    So, I modified the /root/.bash_profile file from:

        PATH=$PATH:$HOME/bin
        export PATH

    to:

        PATH=$PATH:/opt/lampp/bin
        export PATH

    But the change is still not taking effect - opening a new console and running mysql again says bash: mysql: command not found. However, running export PATH=$PATH:/opt/lampp/bin in the console makes it work for that session. So I am doing something wrong with the .bash_profile file - maybe editing the incorrect one, or doing the edit incorrectly.
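    Two things commonly bite here: a new terminal often starts an interactive non-login shell, which reads ~/.bashrc rather than ~/.bash_profile, and an already-open shell never re-reads the profile on its own. A quick way to cover both, using the path from the question:

        # make the change visible to non-login shells as well
        echo 'export PATH=$PATH:/opt/lampp/bin' >> /root/.bashrc

        # reload the profile in the current shell instead of opening a new one
        source /root/.bash_profile
        which mysql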

    Read the article

  • Set a proxy for a VPN server on Ubuntu Server 12.04

    - by Morteza Soltanabadiyan
    I have a VPN server with HTTPS, L2TP, OpenVPN, and PPTP. I want to set a proxy on the server so that all connections coming from VPN clients use the proxy I set on my server. I made a bash script file for it, but the proxy is not working.

        gsettings set org.gnome.system.proxy mode 'manual'
        gsettings set org.gnome.system.proxy.http enabled true
        gsettings set org.gnome.system.proxy.http host 'cproxy.anadolu.edu.tr'
        gsettings set org.gnome.system.proxy.http port 8080
        gsettings set org.gnome.system.proxy.http authentication-user 'admin'
        gsettings set org.gnome.system.proxy.http authentication-password 'admin'
        gsettings set org.gnome.system.proxy use-same-proxy true
        export http_proxy=http://admin:admin@cproxy.anadolu.edu.tr:8080
        export https_proxy=http://admin:admin@cproxy.anadolu.edu.tr:8080
        export HTTP_PROXY=http://admin:admin@cproxy.anadolu.edu.tr:8080
        export HTTPS_PROXY=http://admin:admin@cproxy.anadolu.edu.tr:8080

    Now I don't know what to do to make a global proxy for the server that all VPN clients will use automatically.

    Read the article

  • Java classpath problem in CentOS

    - by Ramesh
    I have installed Java on CentOS 5 but the classpath is not working. My .bash_profile:

        export PATH=$PATH:/zzz/jdk1.6.0_03/bin/
        export JAVA_HOME=/zzz/jdk1.6.0_03/bin/java/
        export CLASSPATH=$CLASSPATH:/zzz/aa/mysql.jar:.

        java -version
        java version "1.6.0"
        OpenJDK Runtime Environment (build 1.6.0-b09)
        OpenJDK Server VM (build 1.6.0-b09, mixed mode)
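    Two details point at the environment rather than the classpath: JAVA_HOME points below bin/ instead of at the JDK root, and java -version reports OpenJDK, so the distribution's Java is being found ahead of /zzz/jdk1.6.0_03 on the PATH. A corrected sketch of the .bash_profile lines (paths taken from the question):

        export JAVA_HOME=/zzz/jdk1.6.0_03
        export PATH=$JAVA_HOME/bin:$PATH        # put the Sun JDK ahead of OpenJDK
        export CLASSPATH=$CLASSPATH:/zzz/aa/mysql.jar:.

        # re-check after "source ~/.bash_profile"
        which java && java -version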

    Read the article

  • Copy one db diagram from one db to another on different servers? (Same db)

    - by sah302
    I used the Copy Database Wizard to copy my database from our test server to our production server. The database copied everything fine except for the diagram. Okay, no problem: first I make sure the target database on production has the support objects created to use database diagramming. Then I select to import data from the other database and choose dbo.sysdiagrams. I go through the rest of the Import Data wizard, but then I get the following errors:

        Validating (Error)
        Messages
        Error 0xc0202049: Data Flow Task: Failure inserting into the read-only column "diagram_id". (SQL Server Import and Export Wizard)
        Error 0xc0202045: Data Flow Task: Column metadata validation failed. (SQL Server Import and Export Wizard)
        Error 0xc004706b: Data Flow Task: "component "Destination - sysdiagrams" (31)" failed validation and returned validation status "VS_ISBROKEN". (SQL Server Import and Export Wizard)
        Error 0xc004700c: Data Flow Task: One or more component failed validation. (SQL Server Import and Export Wizard)
        Error 0xc0024107: Data Flow Task: There were errors during task validation. (SQL Server Import and Export Wizard)

    So apparently it didn't like that. What's the problem? I am pretty much a beginner in SQL Server and usually only do things via the GUI, so I am not sure what to do at this point. The databases are the same, but on different servers. Thanks!
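    The failing column is the identity column diagram_id: the wizard will not write into it unless identity insert is enabled (there is an "Enable identity insert" checkbox under Edit Mappings in the Import and Export Wizard). The same copy can also be done by hand, roughly like this, assuming a linked server named TESTSRV pointing at the test box (server, database and authentication details are placeholders):

        # copy the diagram rows across the linked server with identity insert on
        sqlcmd -S PRODSRV -d ProductionDb -E -Q "\
        SET IDENTITY_INSERT dbo.sysdiagrams ON; \
        INSERT INTO dbo.sysdiagrams (name, principal_id, diagram_id, version, definition) \
        SELECT name, principal_id, diagram_id, version, definition \
        FROM TESTSRV.TestDb.dbo.sysdiagrams; \
        SET IDENTITY_INSERT dbo.sysdiagrams OFF;"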

    Read the article

  • Colorizing your terminal and shell environment?

    - by Stefan Lasiewski
    I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive. What are some good ways to add color to my terminal environment? What tricks do you do? What pitfalls have you encountered? Unfortunately, support for color is wildly variable depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here's what I do currently, after a lot of experimentation: I tend to set TERM=xterm-color, which is supported on most hosts (but not all). I work on a number of different hosts, different OS versions, etc. I'm trying to keep things simple and generic, if possible. Many OSs set things like 'dircolors' by default, and I don't want to modify this everywhere, so I try to stick with the defaults and instead tweak my Terminal's color configuration. I use color for some Unix commands (ls, grep, less, vim) and the Bash prompt. These commands seem to use the standard "ANSI escape sequences". I've managed to find some settings which are widely supported, and which don't print gobbledygook characters in older environments (even FreeBSD 4!), for the most part. From my .bash_profile:

        ### Color support
        # The Terminal application typically does 'export TERM=xterm-color'
        # Some terminal types will print Black, White & underlined with these settings.
        OS=`uname -s`
        case "$OS" in
          "SunOS" )
            # Solaris 9 ls doesn't allow color, so use special characters instead.
            LS_OPTS='-F'
            ;;
          "Linux" )
            # GNU tools support colors! See dircolors to customize colors
            export LS_OPTS='--color=auto'
            # Color support using 'less -R'
            alias less='less --RAW-CONTROL-CHARS'
            alias ls='ls ${LS_OPTS}'
            export GREP_OPTIONS="--color=auto"
            ;;
          "Darwin"|"FreeBSD")
            # Most FreeBSD & Apple Darwin supports colors
            # LS_OPTS="-G"
            export CLICOLOR=true
            alias less='less --RAW-CONTROL-CHARS'
            export GREP_OPTIONS="--color=auto"
            ;;
        esac
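    The snippet covers ls, grep and less but not the prompt itself; for completeness, a small and fairly portable prompt sketch (the colors and layout are just an example; the \[ \] wrappers keep bash's line-length accounting correct):

        # green user@host, blue working directory, plain $ at the end
        if [ "$(tput colors 2>/dev/null || echo 0)" -ge 8 ]; then
            PS1='\[\e[32m\]\u@\h\[\e[0m\]:\[\e[34m\]\w\[\e[0m\]\$ '
        else
            PS1='\u@\h:\w\$ '   # fall back to a plain prompt on colorless terminals
        fi
        export PS1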

    Read the article

  • exported variable not persisted after script execution

    - by Daniele
    I'm facing a weird issue. I have a VM with Solaris 11, and I'm trying to write some bash scripts. If, on the shell, I type:

        export TEST=aaa

    and subsequently run:

        set

    I correctly see a new environment variable named TEST whose value is aaa. If, however, I do basically the same thing in a script, when the script terminates I do not see the variable set. To make a concrete example, if in a file test.sh I have:

        #!/usr/bin/bash
        echo 1: $TEST    #variable not defined yet, expect to print only 1:
        echo 2: $USER
        TEST=sss
        echo 3: $TEST
        export TEST
        echo 4: $TEST

    it prints:

        1:
        2: daniele
        3: sss
        4: sss

    and after its execution, TEST is not set in the shell. Am I missing something? I tried both export TEST=sss and the separate variable set/export, with no difference.
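    This is expected: the script runs in a child process, and a child can never modify its parent shell's environment, so TEST disappears when the script exits. To have the assignments land in the current shell, the script has to be sourced rather than executed; for example:

        # executed in a child process: TEST is gone afterwards
        ./test.sh
        echo "after run:    TEST='$TEST'"

        # read and executed in the current shell: TEST survives
        . ./test.sh          # or: source ./test.sh
        echo "after source: TEST='$TEST'"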

    Read the article

  • Exported CSV file from pgAdmin does not line up correctly

    - by user938363
    We exported a PostgreSQL 9.3 table to a CSV file in pgAdmin. The problem is that from about the 10th line on, the columns were misaligned and did not line up correctly with the columns above. We tried a few times and every output had the same problem. We followed the instructions at http://www.question-defense.com/2010/10/15/how-to-export-from-pgadmin-export-pgadmin-data-to-csv for the export. The only difference is that UTF8 was selected instead of the local charset. What's the right way to export CSV from PostgreSQL?
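    Misaligned rows usually mean some fields contain commas, quotes or newlines and the viewer is splitting on them naively, rather than the export itself being broken; exporting with the server's own CSV writer at least makes the quoting explicit. A sketch using psql (database, table and file names are placeholders):

        # write the table as quoted CSV with a header row
        psql -d mydb -c "\copy my_table TO '/tmp/my_table.csv' CSV HEADER"

    Opening the result in a tool that understands quoted fields (a spreadsheet, or csvkit) rather than lining the raw text up by eye is usually enough to make the columns match again.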

    Read the article

  • SQL Server Logs: missing date ranges

    - by Jeff
    I need to be able to export SQL Server logs into CSV files, which I can easily do with the export function. However, in doing so I've noticed there's a range of dates missing from the SQL Server logs in Management Studio: two months, actually. I'm wondering where these logs might be found, and whether it's possible to reload them so I can view and then export them. They're strictly for informational purposes, but I do need them. Thanks in advance!

    Read the article

  • How do I add a second disk to my zfs root pool

    - by ankimal
    I am trying to add a new disk to my zfs root pool. Here is my current config:

        # zpool status
          pool: rpool
         state: ONLINE
         scrub: none requested
        config:
                NAME        STATE     READ WRITE CKSUM
                rpool       ONLINE       0     0     0
                  c0d0s0    ONLINE       0     0     0
        errors: No known data errors

        bash-3.00# df -h
        Filesystem                      Size  Used Avail Use% Mounted on
        rpool/ROOT/s10x_u7wos_08        311G   18G  293G   6% /
        swap                             14G  384K   14G   1% /etc/svc/volatile
        /usr/lib/libc/libc_hwcap1.so.1  311G   18G  293G   6% /lib/libc.so.1
        swap                             14G   52K   14G   1% /tmp
        swap                             14G   40K   14G   1% /var/run
        rpool/export                    293G   19K  293G   1% /export
        rpool/export/home               430G  138G  293G  32% /export/home
        rpool                           293G   36K  293G   1% /rpool

        # format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
          0. c0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 63>
             /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
          1. c2d0 <Hitachi- JK1181YAHL0YK-0001-16777216.>
             /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0

    Disk 1 above is the new disk I need to attach to expand my root pool (give /export/home some extra space). If I try to attach my new disk to the pool:

        # zpool attach -f rpool c0d0s0 c2d0s0
        cannot attach c2d0s0 to c0d0s0: new device must be a single disk

        # uname -a
        SunOS dsol1 5.10 Generic_139556-08 i86pc i386 i86pc Solaris

    Any ideas?
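    Two separate things are going on here: zpool attach only ever mirrors an existing device, so it would not add capacity even if it succeeded, and a ZFS root pool is also picky about the new device (SMI/VTOC label, a slice rather than a whole disk). If the goal is simply more room for /export/home, a second pool on the new disk is the simpler route; a hedged sketch (pool, dataset and mountpoint names are made up, and migrating the existing data is not shown):

        # create a separate data pool on the second disk
        zpool create datapool c2d0

        # carve out a dataset for home directories and mount it alongside the old one
        zfs create -o mountpoint=/export/home2 datapool/home
        zfs list datapool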

    Read the article

  • Rsyslog: copy logs while changing the facility

    - by Dom
    I have saslauthd saving its logs to LOG_AUTH on our rsyslogd server. It can't be changed without recompiling, and I don't want to do that. I would like to see all the LOG_AUTH messages in LOG_MAIL, because I export the logs to an external machine and I would like to see all the saslauthd logs under LOG_MAIL on the remote server. Locally, of course, I can add "auth.*" to the mail.log file section, but the export will not end up in the right file, because I filter the export by syslog facility/priority. How can I export all the AUTH logs as MAIL logs? Thanks
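    One way that avoids touching saslauthd is to rewrite the facility at forwarding time: rsyslog formats the outgoing message from a template, so a template with a hard-coded PRI of <22> (facility mail, severity info) makes the remote side file these messages under mail. A sketch in legacy rsyslog.conf syntax (the hostname is a placeholder, and the original severity is flattened to info):

        # /etc/rsyslog.conf on the machine running saslauthd
        $template AuthAsMail,"<22>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%"

        # forward the auth facilities to the log host, relabelled as mail
        auth.*;authpriv.*    @@logserver.example.com;AuthAsMail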

    Read the article

  • Siebel Troubleshooting : An ODBC error occurred; SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl

    - by Giri Mandalika
    Symptom: A newly installed Siebel application server fails to start despite successful ODBC connectivity to the database. The SRProc process logs ODBC error messages similar to the following:

        Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (Data length (32) Column definition (3)).
        Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 4 ).
        GenericLog GenericError 1 0002157.. 11-11-18 13:28
        Message: Generated SQL statement:, Additional Message: SQLFetch:
        SELECT RDOBJ.DOCK_ID, RDOBJ.RELATED_DOCK_ID, RDOBJ.SQL_STATEMENT, RDOBJ.CHECK_VISIBILITY, 'N',
               RDOBJ.COMMENTS, RDOBJ.ACTIVE, RDOBJ.SEQUENCE, RDOBJ.VIS_STRENGTH, RDOBJ.REL_VIS_STRENGTH, RDOBJ.VIS_EVT_COLS
        FROM ORAPERF.S_DOCK_REL_DOBJ RDOBJ, ORAPERF.S_DOCK_OBJECT DOBJ
        WHERE RDOBJ.REPOSITORY_ID = (SELECT ROW_ID FROM ORAPERF.S_REPOSITORY WHERE NAME = ?)
          AND DOBJ.ROW_ID = RDOBJ.DOCK_ID
          AND (DOBJ.INACTIVE_FLG = 'N' OR DOBJ.INACTIVE_FLG IS NULL)
          AND (RDOBJ.INACTIVE_FLG = 'N' OR RDOBJ.INACTIVE_FLG IS NULL)
        Message: Error: An ODBC error occurred, Additional Message: Function: DICGetRDObjects; ODBC operation: SQLFetch
        Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (UTLCompressFRead (fseek)).
        Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 0 ).
        Message: GEN-10, Additional Message: Calling Function: DICLoadDObjectInfo; Called Function: Calling DICGetRDObjects
        Message: GEN-10, Additional Message: Calling Function: DICLoadDict; Called Function: DICLoadDObjectInfo
        GenericError (srpdb.cpp (860) err=3006 sys=2) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
        (srpsmech.cpp (74) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
        (srpmtsrv.cpp (107) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
        (smimtsrv.cpp (1203) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
        SmiLayerLog Error Terminate process due to unrecoverable error: 3006. (Main Thread)

    An inconsistent or corrupted dictionary file "diccache.dat" is likely the cause.

    Solution:

    1. Stop the application server and manually kill the remaining Siebel application specific processes, e.g.:

        stop_server all
        pkill siebmtsh
        pkill siebproc
        ..

    2. Remove the $SIEBEL_HOME/bin/diccache.dat file. It will be re-generated during the application server startup.

    3. Start the application server:

        start_server all

    Read the article

  • MEF, IServiceProvider and Testing Visual Studio Extensions

    - by Daniel Cazzulino
    In the latest and greatest version of Visual Studio, MEF plays a critical role, one that makes extending VS much more fun than it ever was. So typically, you just [Export] something, and then someone [Import]s it and that's it. MEF in all its glory kicks in and gets all your dependencies satisfied. Cool, you say, so let's now import ITextTemplating and have some T4-based codegen going! Ah, if only it were that easy. Turns out that by default, none of the VS built-in services are exposed to MEF, apparently because there wasn't enough time to analyze the lifetime, initialization, dependencies, etc. for each one before launch, which makes perfect sense. You don't want to blindly export everything now just in case. There's also the whole VS package initialization thing, which in this version of VS is not so transparently integrated with the MEF publishing side (i.e. a MEF export from a package can get instantiated before its owning package, and in fact, the package can remain unloaded forever and the export will continue to be visible to anyone). ... Read the full article

    Read the article

  • Exporting PowerPoint Slides with Specific Heights and Widths

    - by Damon Armstrong
    I found myself in need of exporting PowerPoint slides from a presentation and was fairly excited when I found that you could save them off in standard image formats. The problem is that Microsoft conveniently exports all images with a resolution of 960 x 720 pixels, which is not the resolution I wanted. You can, however, specify the resolution if you are willing to put a macro into your project:

        Sub ExportSlides()
          For i = 1 To ActiveWindow.Selection.SlideRange.Count
            Dim fileName As String
            If (i < 10) Then
              fileName = "C:\PowerPoint Export\Slide0" & i & ".png"
            Else
              fileName = "C:\PowerPoint Export\Slide" & i & ".png"
            End If
            ActiveWindow.Selection.SlideRange(i).Export fileName, "PNG", 1280, 720
          Next
        End Sub

    When you call the Export method you can specify the file type as well as the dimensions to use when creating the image. If the macro approach is not your thing, then you can also modify the default settings through the registry: http://support.microsoft.com/kb/827745

    Read the article

  • Models with more than one mesh in JMonkeyEngine

    - by Andrea Tucci
    I'm a new jMonkeyEngine developer and I'm beginning to import models. I tried to import simple models and no problems appeared, but when I export some OBJ models having more than one mesh in the OgreXML format, Blender saves multiple meshes with their own materials (e.g. one mesh for the face, another for the body, etc.). Can I export all the meshes as one? I've tried to join all the meshes to a major one with Blender (face joins body), but when I export the model and then create the Spatial in jME (loading the path of the "merged" mesh), all the meshes that are joined to the major one don't have their materials! Here is a clearer example: I have an .obj model with 3 meshes and I export it. I get mesh1.mesh.xml, mesh2.mesh.xml, mesh3.mesh.xml and their materials mesh1.material, mesh2.material, mesh3.material, so I import the folder into Assets/Models/Test and now I have to create something like:

        Spatial head = assetManager.loadModel( [path] );
        Spatial face = assetManager.loadModel( [path] );

    one for each mesh, and then attach them to a common node. I think there must be a way to merge those meshes while maintaining their materials! What do you think? Thanks

    Read the article

  • create a .deb Package from scripts or binaries

    - by tdeutsch
    I searched for a simple way to create .deb packages for things which have no source code to compile (configs, shell scripts, proprietary software). This was quite a problem, because most of the packaging tutorials assume you have a source tarball you want to compile. Then I found this short tutorial (German). Afterwards, I created a small script to create a simple repository, like this:

        rm /export/my-repository/repository/*
        cd /home/tdeutsch/deb-pkg
        for i in $(ls | grep my); do dpkg -b ./$i /export/my-repository/repository/$i.deb; done
        cd /export/avanon-repository/repository
        gpg --armor --export "My Package Signing Key" > PublicKey
        apt-ftparchive packages ./ | gzip > Packages.gz
        apt-ftparchive packages ./ > Packages
        apt-ftparchive release ./ > /tmp/Release.tmp; mv /tmp/Release.tmp Release
        gpg --output Release.gpg -ba Release

    I added the key to the apt keyring and included the source like this:

        deb http://my.default.com/my-repository/ ./

    It looks like the repo itself is working well (I ran into some problems; to fix them I needed to add the Packages file twice and use the temp-file workaround for the Release file). I also put some downloaded .debs into the repo, and they appear to work without problems. But my self-created packages don't... When I do sudo apt-get update, they cause errors like this:

        E: Problem parsing dependency Depends
        E: Error occurred while processing my-printerconf (NewVersion2)
        E: Problem with MergeList /var/lib/apt/lists/my.default.com_my-repository_._Packages
        E: The package lists or status file could not be parsed or opened.

    Does anyone have an idea what I did wrong?
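    "Problem parsing dependency Depends" normally points at the DEBIAN/control file inside the package rather than at the repository itself: an empty Depends: field, stray whitespace or a missing comma will produce exactly this, even though dpkg -b happily builds the .deb. A minimal control file sketch for one of the packages, e.g. my-printerconf/DEBIAN/control (the version, dependency and maintainer values are invented; drop the Depends line entirely if there are no dependencies):

        Package: my-printerconf
        Version: 1.0-2
        Section: admin
        Priority: optional
        Architecture: all
        Depends: cups (>= 1.4)
        Maintainer: T. Deutsch <tdeutsch@example.com>
        Description: Site printer configuration
         Shell scripts and defaults for our printers.

    It is also worth looking at the generated Packages file for the offending stanza, since that is what apt actually parses at update time.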

    Read the article

  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is how you can back up databases in Windows Azure SQL Database. What we have had access to was the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use.

    The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created. Previously, I've had to automate this process by myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration.

    Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups.

    It's important to consider the cost impact of this as well. You are charged for however many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one.

    This is a much-needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx

    Read the article

  • Is using the student version of 3DS Max and Unity3d legal?

    - by SubZeron
    I am developing an indie game together with my friend using the Unity3D engine. I bought Silo 3D for modeling two months ago, and for texturing I use 3D Coat. We plan to sell our game in the future. For the animations I work with 3DS Max (only the animation part). My question is: can I work with a student license? The license for the original version is too expensive for me. I am still at university and I cannot buy the 3DS Max license, which costs 4000 €. As an alternative I have the choice between Blender (I can't work with this software and don't have time to invest in learning a new program) and Truespace (it can't export FBX animations, especially with bones), so for me 3DS Max is the best choice to be effective and quick. Is it possible to prove it when I export my FBX characters from 3DS Max to Unity3D? I mean, can they find out that I used the student license of 3DS Max for the animations after the release of the game? Maybe with the help of DRM? Can I solve that problem by exporting the FBX from 3DS Max to Blender and then exporting the same FBX to Unity3D?

    Read the article

  • Animations in FBX exported from Maya are anchored in the wrong place

    - by Simon P Stevens
    We are trying to export a model and animation from Maya into Unity3D. In Maya, the model is anchored (pivot point) at the feet (and the body moves up and down). However, after we perform the FBX export and import the file into Unity, the model now appears to be anchored by the waist/head and the feet move. These example videos probably explain the problem more clearly:

        Example video - Maya - Correct
        Example video - Unity - Wrong

    We have also noticed that if we take the FBX file and import it back into Maya, we have exactly the same problem. It seems that the constraints no longer work after the FBX is reimported back into Maya, which just kills the connection between the joints and the control objects. When exporting the FBX we have tried checking the 'bake animations' checkbox. The fact that the same problem exists when importing the FBX back into both Maya and Unity suggests that the source of the problem is most likely the Maya FBX export. Has anyone encountered this problem before, and do you have any ideas how to fix it?

    Read the article
