Search Results

Search found 22358 results on 895 pages for 'django raw query'.

Page 402/895

  • dig show only answer

    - by Zulakis
    I want dig only to show the answer of my query. Normally, it prints out a lot of additional info like this: ;; <<>> DiG 9.7.3 <<>> google.de ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55839 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;google.de. IN A ;; ANSWER SECTION: google.de. 208 IN A 173.194.69.94 ;; Query time: 0 msec ;; SERVER: 213.133.99.99#53(213.133.99.99) ;; WHEN: Sun Sep 23 10:02:34 2012 ;; MSG SIZE rcvd: 43 I want this to be reduced to just the answer section. dig has a lot of options; a good one I found was +noall +answer ; <<>> DiG 9.7.3 <<>> google.de +noall +answer ;; global options: +cmd google.de. 145 IN A 173.194.69.94 It leaves out most of the stuff, but still shows this options thing. Any ideas on how to remove it using dig options? I sure could cut it out using other tools, but an option with dig itself would be the cleanest and nicest.
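    A minimal sketch of the usual options (assuming a reasonably recent BIND dig): +nocmd suppresses the leading version/options comment that +noall leaves behind, and +short drops everything except the record data.

        # answer section only, without the "global options" header comment
        dig +nocmd google.de +noall +answer
        # just the record data, one value per line
        dig +short google.de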

    Read the article

  • mount qcow2 snapshots

    - by phhe
    I'm running some Xen servers and started migrating to KVM. Currently my guests are running on either raw images or LVM volumes. I found that libvirt provides some very nice snapshot features (virsh snapshot-create, ...) so I decided to use qcow2 instead of raw/LVM. And here is my question: libvirt creates the same sort of snapshots on the qcow2 image as if I use qemu-img - is it possible to mount them? I read something about qemu-nbd and the possibility of mounting qcow2 images, but I could not find a word about snapshots.
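    A sketch of the usual qemu-nbd route (device names, mount point and the snapshot name are assumptions; -l/--load-snapshot for internal snapshots is only available in reasonably recent qemu-nbd builds):

        sudo modprobe nbd max_part=8
        # expose the image as a block device
        sudo qemu-nbd --connect=/dev/nbd0 guest.qcow2
        # or, if supported, expose one of its internal snapshots instead:
        # sudo qemu-nbd --connect=/dev/nbd0 -l mysnapshot guest.qcow2
        sudo mount /dev/nbd0p1 /mnt/guest
        # when done
        sudo umount /mnt/guest && sudo qemu-nbd --disconnect /dev/nbd0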

    Read the article

  • One hard drive missing after converting from ide to ahci

    - by user700859
    I have two 500GB SATA HDD in my desktop running XP. After converting from IDE to AHCI, the HDD I use to store data can no longer be accessed from XP. It shows only the first partition as 150GB, and data type is listed as RAW. However, if I boot into a live linux environment or if I pull that HDD out and connect it using USB, I can still get to all my files again. So I completely formatted the HDD into one single partition. But XP still does not recognize it. It still shows the disk as the 150GB partition and data type is still RAW. What's going on and how do I fix this? Thanks in advance.

    Read the article

  • Downloading file from FTP using cURL

    - by Josiah
    I'm trying to use a cURL command to download a file from an FTP server to a local drive on my computer. I've tried curl "ftp://myftpsite" --user name:password -Q "CWD /users/myfolder/" -O "myfile.raw" But it returns an error that says: curl: Remote file name has no length! curl: try 'curl --help' or 'curl --manual' for more information curl: (6) Could not resolve host: myfile.raw; No data record of requested type I've tried some other methods, but nothing seems to work. Also, I'm not quite sure how to specify which folder I want the file to be downloaded to. How would I do that?
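    The error usually means the remote file name has to be part of the URL itself rather than passed via -O after a -Q command; a hedged sketch (host, path and credentials are placeholders):

        # fetch the file into the current directory, keeping the remote name
        curl --user name:password -O "ftp://myftpsite/users/myfolder/myfile.raw"
        # or choose the local folder/name explicitly with -o
        curl --user name:password -o /path/to/local/myfile.raw "ftp://myftpsite/users/myfolder/myfile.raw"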

    Read the article

  • Restart single uWSGI application (when it's in emperor mode)

    - by Oli
    I'm running uWSGI in emperor mode to host a bunch of Django sites based on their individual configs. These are supposed to update when the emperor detects a change in the config file, and this largely works when I just touch the relevant uwsgi.ini file. But occasionally I'll mess something up in the Django site and the server won't load. Yeah, yeah, I should be testing better but that's not really the point. When this happens, uWSGI seems to mark the site as dead and stops trying to run it (seems to make sense). Even after I fix the underlying issue, no amount of touching will get that site's uWSGI process up and running. I have to reload the whole uWSGI server (knocking dozens of sites out at once for a few seconds). Is there a way to force uWSGI to just reload one of its sites?
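    A sketch of the usual workaround when a vassal has been blacklisted (paths are assumptions for a vassal-directory setup): touching the ini reloads a healthy vassal, and moving the config out of and back into the emperor's directory makes the emperor treat the site as a brand-new vassal, without restarting the emperor itself.

        # normal reload of a single site
        touch /etc/uwsgi/vassals/mysite.ini
        # if the vassal was marked dead, re-register it as a new one
        mv /etc/uwsgi/vassals/mysite.ini /tmp/ && sleep 2 && mv /tmp/mysite.ini /etc/uwsgi/vassals/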

    Read the article

  • How to set up Nginx as a caching reverse proxy?

    - by Continuation
    I heard recently that Nginx has added caching to its reverse proxy feature. I looked around but couldn't find much info about it. I want to set up Nginx as a caching reverse proxy in front of Apache/Django: to have Nginx proxy requests for some (but not all) dynamic pages to Apache, then cache the generated pages and serve subsequent requests for those pages from cache. Ideally I'd want to invalidate the cache in 2 ways: (1) set an expiration date on the cached item; (2) explicitly invalidate the cached item, e.g. if my Django backend has updated certain data, I'd want to tell Nginx to invalidate the cache of the affected pages. Is it possible to set Nginx up to do that? How?
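    A minimal sketch of the expiration side using the standard proxy_cache directives (paths, zone name, backend address and the cached location are assumptions); explicit invalidation is not built into stock nginx, so it typically needs the third-party ngx_cache_purge module or manual deletion of the cache files.

        cat <<'EOF' | sudo tee /etc/nginx/conf.d/pagecache.conf
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m inactive=60m;

        server {
            listen 80;

            location /reports/ {                    # only some dynamic pages are cached
                proxy_pass        http://127.0.0.1:8080;
                proxy_cache       pagecache;
                proxy_cache_valid 200 10m;          # expiration on cached items
            }

            location / {                            # everything else goes straight to Apache
                proxy_pass http://127.0.0.1:8080;
            }
        }
        EOF
        sudo nginx -t && sudo /etc/init.d/nginx reload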

    Read the article

  • DosBox Booting From HDD Image, FreeDOS Image created with qemu-img.

    - by TechZilla
    I'm having trouble booting an HDD image with DosBox. I've only gotten either read errors or boot failures. The HDD image is a verified working FreeDOS installation, created with qemu-img. The image has been formatted FAT32, and it's working as expected with QEMU. The image is only 1 GB in size, and is a flat raw image. I have been able to mount it with Linux, for ease of file transfer. I was even able to boot it with DOSEMU after I mounted the image under Linux. I would love to somehow just boot from the raw image file, but I would have no problem booting from a mount. I just can't get anything to happen, and I have read the documentation over. I have verified DosBox is working as expected with its included DOS-like environment. I would appreciate any help, as I just don't have much DosBox experience.
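    A sketch only, appending an [autoexec] section to the DOSBox config so the raw image is attached as a hard disk and booted. The -size geometry (bytes per sector, sectors, heads, cylinders) is an assumption for a ~1 GB image and has to match how the image was created; stock DOSBox also has limited FAT32 support, so this may still fail on a FAT32 image.

        cat <<'EOF' >> ~/.dosbox/dosbox-0.74.conf
        [autoexec]
        imgmount 2 /path/to/freedos.img -size 512,63,16,2080 -t hdd -fs none
        boot -l c
        EOF
        dosbox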

    Read the article

  • Building intranet search

    - by gmkv
    At work, we have lots of information squirreled away in many different sites -- wikis, product docs, ticketing system, etc -- many of which require authentication. I'm very interested in having a single way to search all our various silos, and in my spare time have looked at Nutch, Grub, Django + Haystack, etc. None of these is a complete solution a la Google Mini or Google Search Appliance. Has anybody built a basic intranet search engine out of a mixture of these tools? Would you have recommendations about how to go about it? I like Django, and Haystack seems to be a mildly popular search solution for it, but I'd need to wire up a crawler that can support crawling authenticated sites to it.

    Read the article

  • Forward Request to Multiple Servers

    - by cactuarz
    We have 2 servers. One is the old server and the other is the new one. We are currently doing a migration because the old server is not capable enough to handle everyday requests. The specs are: Old server: Ubuntu 10.04, Nginx as reverse proxy, Apache, WSGI, Python/Django. New server: Ubuntu 10.04, Nginx, Gunicorn, Python/Django, Celery+Redis. Our manager asked us to research whether the old server can forward every incoming request to multiple servers, for example setting Nginx on the old server to forward all requests to both the old and the new server. The purpose is to run the same traffic against the new server, using the old server as a comparison, to see if the new server is ready to take over the role. Please help, whether with an idea, something we should install, or word that what we want is impossible. Many thanks.
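    One way to duplicate traffic is nginx's mirror directive, sketched below (upstream names and addresses are assumptions); note that mirror only appeared in nginx 1.13.4, so on a stock Ubuntu 10.04 nginx you would need a newer build, or a replay tool such as gor or tcpreplay instead.

        cat <<'EOF' | sudo tee /etc/nginx/conf.d/mirror.conf
        upstream old_backend { server 127.0.0.1:8080; }
        upstream new_backend { server 10.0.0.2:8000; }

        server {
            listen 80;

            location / {
                mirror /mirror;                 # copy of every request, response discarded
                proxy_pass http://old_backend;  # the old stack still answers the client
            }

            location = /mirror {
                internal;
                proxy_pass http://new_backend$request_uri;
            }
        }
        EOF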

    Read the article

  • Excel VLOOKUP using results from a formula as the lookup value [on hold]

    - by Rick Deemer
    I have a column on a sheet called RAW DATA where I must remove the first 2 characters "RO" from each value and put the result into a cell on a sheet called ROSS DATA. Some of the values in that column have 3 digits after the "RO", and some have 5 digits. To do that I used =REPLACE('RAW DATA'!A3,1,2,"") Then I need to use this resultant string as the lookup value in a VLOOKUP. The VLOOKUP will look at a named range called DAP on a sheet called DAP, in column 5 for an exact match, and I need it to return that value to the cell. I have tried using INDIRECT in different ways to no avail, and I'm not sure that I fully understand its usage. So at this point I am Googling for a method to do this and am at a standstill.
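    A sketch of the formula, on the assumption that INDIRECT isn't actually needed here: the REPLACE result can be nested straight into VLOOKUP as the lookup value, wrapped in VALUE() only if the first column of the DAP range holds numbers rather than text.

        =VLOOKUP(REPLACE('RAW DATA'!A3,1,2,""), DAP, 5, FALSE)
        =VLOOKUP(VALUE(REPLACE('RAW DATA'!A3,1,2,"")), DAP, 5, FALSE)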

    Read the article

  • Software update shows "Problem with the system installer tool" error in OS X Lion

    - by Elnaz Shahmehr
    I can't update my system. Does anyone know what the problem is? sudo softwareupdate -i -a Password: Software Update Tool Copyright 2002-2010 Apple Downloading Digital Camera Raw Compatibility Update Waiting to install Digital Camera Raw Compatibility Update Downloading Java for OS X 2012-002 Verifying Java for OS X 2012-002 Waiting to install Java for OS X 2012-002 Downloading iTunes Waiting to install iTunes Downloading Safari Verifying Safari Waiting to install Safari Checking packages… Installing Waiting for other installations to complete… Validating packages… Writing files… Package failed: There was a problem with the system installer tool. Package failed: There was a problem with the system installer tool. Package failed: There was a problem with the system installer tool. Package failed: There was a problem with the system installer tool. Done.

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data...), an English setter could be seen as the link to the right data. Data, Data, Data: we are living in a world where data technology based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on has a predominant role in our life. More and more technologies are used to analyze/track our behavior, try to detect patterns, and propose "the best/right user experience" to us, from the Google Ad services to Telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more we generate data, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large scale distributed processing was an emerging infrastructure need and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab-like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0, and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server (or VirtualBox appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the Java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1. Modify your .bash_profile: export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile) Set up ssh trust for the Hadoop process; this is a mandatory step, and in our case we have to establish a "local trust" as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost. We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes. 
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable, hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation fo the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password:] The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from SQL Developer: and finally the number of lines in the Oracle table, imported from our Hadoop HDFS cluster: SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a future post we will integrate data from a Hive database, and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how this unstructured data world can be integrated into the Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used and Oracle is offering a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle related technologies: NoSQL: Introduction to Oracle NoSQL Database Using Oracle NoSQL Database Big Data: Introduction to Big Data Oracle Big Data Essentials Oracle Big Data Overview Oracle Data Integrator: Oracle Data Integrator 12c: New Features Oracle Data Integrator 11g: Integration and Administration Oracle Data Integrator: Administration and Development Oracle Data Integrator 11g: Advanced Integration and Development Oracle Coherence 12c: Oracle Coherence 12c: New Features Oracle Coherence 12c: Share and Manage Data in Clusters Oracle Coherence 12c: Oracle GoldenGate 11g: Fundamentals for Oracle Oracle GoldenGate 11g: Fundamentals for SQL Server Oracle GoldenGate 11g Fundamentals for Oracle Oracle GoldenGate 11g Fundamentals for DB2 Oracle GoldenGate 11g Fundamentals for Teradata Oracle GoldenGate 11g Fundamentals for HP NonStop Oracle GoldenGate 11g Management Pack: Overview Oracle GoldenGate 11g Troubleshooting and Tuning Oracle GoldenGate 11g: Advanced Configuration for Oracle Other Resources: Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you some more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He worked in the banking sector and with AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.

    Read the article

  • Failed to get i915 symbols, graphics turbo disabled

    - by Optimus Prime
    I'm getting "Failed to get i915 symbols, graphics turbo disabled" error message after installing following softwares and few updates from Ubuntu. Django, Mysql server 5.5 Mysql benchmark And i have installed few updates for ubuntu. It was showing as Security Updates for Ubuntu. After installing Updates the update manager showed that i should restart the system. On restart i got following error message. " failed to get i915 symbols, graphics turbo disabled". So i tried the work around mentioned here (using the Live CD) ie add intel_ips to the blacklist echo "blacklist intel_ips" /etc/modprobe.d/blacklist.conf add i915 and intel_ips to /etc/modules echo -e "i915/nintel_ips" /etc/modules Now when start the system it freezes at Ubuntu splash screen. I'm using Ubuntu 12.04 LTS, on Dell inspiron N1040. I need to boot the system as i have spend lot of days configuring. Python and Django. Please help EDIT : OK when i restarted the system yesterday it magically turned on. Now i can view my desktop. But one problem, i can't mount any of the drives. It says failed to mount Drive. I'm also frequently getting "Ubuntu System Failure" Error Message.

    Read the article

  • Exalogic – The One Day Installation Challenge

    - by james.bayer
    It’s a really exciting time for the extended WebLogic community as we are enjoying seeing the impressive results of Exalogic deployments.  At Oracle Open World, a lot of people I spoke with came away impressed with the raw performance.  However, Exalogic offers a lot more than just raw performance.  I had the pleasure of working with Ram Sivaram during one of the Exalogic training sessions in Santa Clara.  In this video diary, he shows the Exalogic machine arrive on the shipping dock, get unpacked, wired up, powered on, configured, and installed with a WebLogic Server cluster in just about 10 hours.  I’ve worked with customers in the past that have taken several weeks or longer to get an environment ready after the hardware arrives.  This typically involves many different specialized teams in their organization.  Mohamad Afshar just wrote a great explanation of the benefit of Engineered Systems and contrasting that to the status quo.  Being able to streamline deployment of middleware capacity will have a lot of value for customers shortening time to deployment.  Thanks for the video Ram, you’ve set a high bar, we’ll see if anyone can top your time!  

    Read the article

  • Game software design

    - by L. De Leo
    I have been working on a simple implementation of a card game in object oriented Python/HTML/Javascript and building on the top of Django. At this point the game is in its final stage of development but, while spotting a big issue about how I was keeping the application state (basically using a global variable), I reached the point that I'm stuck. The thing is that ignoring the design flaw, in a single-threaded environment such as under the Django development server, the game works perfectly. While I tried to design classes cleanly and keep methods short I now have in front of me an issue that has been keeping me busy for the last 2 days and that countless print statements and visual debugging hasn't helped me spot. The reason I think has to do with some side-effects of functions and to solve it I've been wondering if maybe refactoring the code entirely with static classes that keep no state and just passing the state around might be a good option to keep side-effects under control. Or maybe trying to program it in a functional programming style (although I'm not sure Python allows for a purely functional style). I feel that now there's already too many layers that the software (which I plan to make incredibly more complex by adding non trivial features) has already become unmanageable. How would you suggest I re-take control of my code-base that (despite being still only at < 1000 LOC) seems to have taken a life of its own?

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that Generally, you should find that Samba performs similarly to ftp at raw transfer speed. In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or alternatively tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details: Both computers are running Ubuntu 10.10 (using Samba because I have a Mac as well) The Samba share is on a local home network, mounted as $ mount ... //server.local/share/ on /mnt/share type cifs (rw,mand) Samba performance was tested by copying (cp) a single file of ~4GB to and from the share, using time for timing and calculating transfer speed by hand. ftp performance are the numbers from the ftp client for get/put of the same file. iperf gives network speed ~900 Mbits/s bonnie++ gives disk speeds 200 MB/s on both sides for block reads as well as block writes Tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options), most of them made little to no difference. (The one that made a difference caused write speed to drop 50%.)
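    A sketch of how one might narrow it down (share, credentials and file names are placeholders): compare smbclient's own transfer speed against the kernel cifs mount, and try larger rsize/wsize mount options, since the userspace path and the mount options are the two usual suspects; the values shown are illustrative and may be clamped by the kernel.

        # raw smbclient transfer, bypassing the kernel cifs mount
        time smbclient //server.local/share -U user%pass -c 'get bigfile.bin /dev/null'
        # remount with larger read/write sizes and repeat the copy test
        sudo mount -t cifs //server.local/share /mnt/share -o user=user,rsize=130048,wsize=130048
        time cp /mnt/share/bigfile.bin /dev/null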

    Read the article

  • Getting MTP to work with a Galaxy tab 2 7.0?

    - by Wouter
    I'm trying to get MTP with the galaxy tab 2 7.0 working on my ubuntu installation. Such that I can access the files. I tried to do what is described here: http://www.omgubuntu.co.uk/2011/12/how-to-connect-your-android-ice-cream-sandwich-phone-to-ubuntu-for-file-access I however fail at executing one of the following commands mtp-detect | grep idVendor mtp-detect | grep idProduct This fails [20:42|0] $ mtp-detect | grep idVender Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note. PTP_ERROR_IO: failed to open session, trying again after resetting USB interface LIBMTP libusb: Attempt to reset device LIBMTP PANIC: failed to open session on second attempt Unable to open raw device 0 [20:44|0] $ mtp-detect | grep idProduct Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note. PTP_ERROR_IO: failed to open session, trying again after resetting USB interface LIBMTP libusb: Attempt to reset device LIBMTP PANIC: failed to open session on second attempt Unable to open raw device 0 Now my guess was was that the idVender is the same as the VID (04e8) and the idProduct is the same as PID (6860) Now I continued to work with those values and completed the tutorial. When finished I tried android-connect This returned fuse: bad mount point `/media/GalaxyTab': Transport endpoint is not connected Does anybody have a clue what to do? Also I want to note that when I connect my GalaxyTab 2 7.0 that I still get a pop-up of ubuntu that a device was connected. I also can still see the mapstructure, the problem however is is that all the folders have 0 bytes and do not have any subfolders. I can only see the folders in the root. ps. I also checked a similar question and tried what is described in this answer http://askubuntu.com/a/88630/27480

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMlib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performance, etc, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, since 10gR1 up to today, we do not interact differently with ASM managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components. In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you give it to ASM (or even just create a tablespace without ASM on that device with datafile '/dev/sdf') and next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004): Linux async IO wasn't pretty; persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage; and system resource usage in terms of open file descriptors. So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier - how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third party OS or storage vendor to write a library using a specific Oracle defined interface that gets used by the ASM instance and by the database instance when available. This interface offered 2 components: define an IO interface - allow any IO to the devices to go through ASMLib; and define device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance. This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). 
ODM was specified many years before we introduced ASM and allowed third party vendors to implement their own IO routines so that the database would use this library if installed and make use of the library open/read/write/close,.. routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem.. so ASMLib was not something new, it was basically based on the same model. You have libodm for just database access, you have libasm for asm/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third party extensions. ASMLib for Linux, since it was a reference implementation implemented both interfaces, the storage discovery part and the IO part. There are 2 components : Oracle ASMLib - the userspace library with config tools (a shared object and some scripts) oracleasm.ko - a kernel module that implements the asm device for /dev/oracleasm/* The userspace library is a binary-only module since it links with and contains Oracle header files but is generic, we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes and this library interacts with the OS through the asm device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL,.. The library itself doesn't actually care much about the OS version, the kernel module and device cares. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say create an ASM disk label foo on, currently /dev/sdf... So if /dev/sdf disappears and next time is /dev/sdg, we just scan for the label foo and we discover it as /dev/sdg and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything it will be taken care of. So it's a convenience thing. The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/* and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle related device driver but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels. Advantages for using ASMLib : A good async IO interface for the database, the entire IO interface is based on an optimal ASYNC model for performance A single file descriptor per Oracle process, not one per device or datafile per process reducing # of open filehandles overhead Device scanning and labeling built-in so you do not have to worry about messing with udev or devlabel, permissions or the likes which can be very complex and error prone. Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions, SLES (in fact in the early days still even for this thing called United Linux) and RHEL. 
The driver didn't make sense to get pushed into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure and QA and release management to build kernel modules for every architecture, every linux distribution and every major and minor version we worked with the vendors to get them to add this tiny kernel module to their infrastructure. (60k source code file). The folks at SuSE understood this was good for them and their customers and us and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors so for quite some time we continued to build it for RHEL and of course as we introduced Oracle Linux end of 2006 also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system and as we churned out Oracle Linux kernels whether it was for a public release or for customers that needed a one off fix where they also used asmlib, we didn't have to do any extra work it was just all nicely integrated. With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel, note to those that wonder, yes it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block, send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum and to the actual DISK which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk. application to disk. Because we have ASMLib we had an entry into the Linux kernel and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels, the oracleasm kernel module depends on the main kernel to have support for it so we can make use of it. Thanks to UEK and us having the ability to ship a more modern, current version of the Linux kernel we were able to introduce this feature into ASMLib for Linux from Oracle. This combined with the fact that we build the asm kernel module when we build every single UEK kernel allowed us to continue improving ASMLib and provide it to our customers. So today, we (Oracle) provide Oracle ASMLib for Oracle Linux and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5 but since RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL and we did not have the ability to use the latest kernel features to introduce the Data Integrity features and we didn't want to end up with multiple versions of asmlib as maintained by us. SuSE SLES still builds and comes with the oracleasm module and they do all the work and RHAT it certainly welcome to do the same. They don't have to rebuild the userspace library, it's really about the kernel module. 
    And finally, to re-iterate a few important things: Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Often customers confuse ASMLib with ASM. Again, ASM exists on every Oracle supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib. The Oracle ASMLib userspace is available from OTN, and the kernel module is shipped along with OL/UEK for every build and by SuSE for SLES for every one of their builds. The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel - only for UEK. ASMLib for Linux is/was a reference implementation for any third party vendor to be able to offer, if they want to, their own version for their own OS or storage. ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel. Hope this helps.
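    For reference, the labeling and scanning the post describes is done with the oracleasm helper script; a hedged sketch (device name and disk label are examples, and the exact invocation varies slightly between ASMLib versions):

        # one-time setup: driver owner/group and boot-time scanning
        sudo /etc/init.d/oracleasm configure
        # label a partition so ASM can find it regardless of its /dev/sdX name
        sudo /etc/init.d/oracleasm createdisk DATA1 /dev/sdf1
        # rescan and list labeled disks (they appear under /dev/oracleasm/disks/)
        sudo /etc/init.d/oracleasm scandisks
        sudo /etc/init.d/oracleasm listdisks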

    Read the article

  • how to choose a web framework and javascript library?

    - by Trylks
    I've been procrastinating learning some framework for web apps w/ some library for AJAX, something like django with prototype, or turbogears with mootools, or zeta components with dojo, grok, jquery, symfony... The point is to spend some of my spare time, have "fun" and create cool stuff that hopefully is some useful. I think maybe I wouldn't like something like GWT or pyjamas because I wouldn't like to "get married" with some technology, I want to keep my freedom to add another javascript library, and so on. I didn't decide even the language yet, but I think I'd prefer python. PHP could be fine if there is some framework that is nice enough. Besides that, I don't even know where to start. I don't feel like learning a framework to then realize there is something that I cannot comfortably do, switch to another framework then find that a third framework has something really cool, etc. And the same goes for javascript libraries. So, some guidance would be really appreciated. I don't really know why are so many options available and what do they aim for, I guess some of them focus on some aspects and some on others, but I just want to make cool and nice apps that I can easily maintain, without spending too much time on coding or learning and avoiding the "trapped in the framework" feeling, when doing something is awfully complicated (or even impossible) with compared with the rest of things or doing that same thing on a different framework. I guess in the end I'll go for django and jquery since they are the most widely used options, afaik, but if I was going for the most widely used options I guess I should choose Java or PHP (I don't really like Java for my spare time, but php is not so bad), so I preferred to ask first. I think the question has to consider both, framework and library, since sometimes they are coupled. I think this is the place to ask this kind of things, sorry if not, and thank you.

    Read the article

  • What benefits can I get upgrading my ASP.NET (Webform) + DAL(EF) + Repository + BLL structure to MVC?

    - by Etienne
    I'm in the process of defining an approach that may best fit our needs for a big web application development. For now, I'm thinking going with an ASP.NET Architecture with a DAL using Entity Framework, a Repository concept to not access DAL directly from BLL and a BLL that call the repository and make every manipulations necessary to prepare data to push in a presentation layer (.aspx files). I don't plan to use ASP.Net controls and prefer to keep things simple and lightweight using plain html, jQuery UI controls and do most of the server calls with jQuery Ajax. Sometimes, when needed, I plan to use handlers (.ashx) to call BLL methods that will return JSON or HTML to client for dynamic stuff. My solution also has a test project that Mock the Repository with in-memory data to not repose on database for testing BLL methods... It may be usefull to add that we will build a big application over this architecture with hundreds of tables and store procedures with a lot of reading and writing to database. My question is, having this architecture in mind, Is there any evident advantages that I can obtain by using an MVC3 project instead of the described architecture base on Webform? Do you see any problem in this architecture that may cause us problem during the next steps of development? I know the MVC pattern for using it in others projects with Django... but the Microsoft MVC implementation look so much more complex and verbose than Django MVC and it's why I'm hesitating (or waiting for a little push?) right now before jumping into it... We are in a real project with deadlines and don't want to slow the development process without any real benefits.

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality check information (pressure, temperature, flow rate, etc.) generally recorded @ 1 Hz. Then, the data are assigned a quality flag (human and/or program), processed (apply calibration curves) and flagged. So, we basically end up with 2 datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up/upgrading MySQL. There are not many links between the data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale than daily text files do. Well, what would you advise? Thanks.

    Read the article

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server, I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'dashboarddb', 'USER': 'nova', 'PASSWORD': 'nova', 'HOST': 'localhost', 'default-character-set': 'utf8' }, } When I run the Dashboard server and enter the username + password, it returns this error in the browser: Unable to find the server at mykeystoneurl (HTTP 400) And in the command line: DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. Validating models... 0 errors found Django version 1.3.1, using settings 'openstack_dashboard.settings' Development server is running at http://0.0.0.0:8888/ Quit the server with CONTROL-C. Request returned failure status. Traceback (most recent call last): File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request body = json.loads(body) File "/usr/lib/python2.7/json/__init__.py", line 326, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735 I also tried logging in as "admin" with password "password" or "secrete", but it didn't work. What's wrong? Thank you!
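    For what it's worth, the "mykeystoneurl" placeholder usually means Horizon has not been pointed at a real Keystone endpoint; the credentials in DATABASES are only for Horizon's own database, not what the login form checks. A hedged sketch of the relevant local_settings.py lines (host and port are assumptions for a local install of that era), with the login then done using a Keystone user rather than the MySQL one:

        cat <<'EOF' >> openstack_dashboard/local/local_settings.py
        OPENSTACK_HOST = "127.0.0.1"
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
        EOF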

    Read the article

  • Where to implement storable items

    - by James Hay
    I'm creating a multiplayer online trading game. The things that are traded range from raw items to complex products. For example Steel is a raw item. Mechanical Assembly is a more complex item that requires 2x Steel and maybe 1x Rubber. Then Hydraulics is an item that contains 2x Mechanical Assemblies and 1x Electronics (which is another complex item). So and so forth. These items will be created by me, players can't create their own items, so it doesn't need to be able to handle arbitrary layers of complexity for items. If my example isn't clear, think Minecraft. You have wooden planks, which can be made into sticks. From there the sticks - combined with metals - can be made into tools. My game is nothing to do with minecraft or any sandbox building game, but it uses a similar progressive complexity to creating items that I want to have in my game. My question is basically, how do you store something like this assuming that I will want to add more items in the future? Do you store it in a database or in a seperate library that the game uses? EDIT None of the items actually "do" anything, they are simply there to either sell, purchase, or combine with other items to make a more complex item, which can then be sold, purchased or combined... you get the idea. The items themselves would not have any properties, but the instances of the items would. For example an item that one player has would have a certain "quality" and if they were selling it a certain "price". An instance of that same item that a different player had would need to have a different "quality" and "price" if they were selling it. I think the price part will not be required on an individual item because instead I would have a "sale" object which was for a price and contained certain items.

    Read the article

  • How to deal with the need to know multiple programming languages? When to stop learning new languages?

    - by Raphael
    I am a relatively young programmer. I am 23 and I have been programming professionally for about 5 years. As most programmers I started with C, learned some x86 assembly for fun and then I found C++ which turned out to be my greatest passion in the programming world. Programming with C and C++ forces you to learn platform specific APIs, libs and frameworks all of each requires constant study and experimentation. After some time I had to move on to Java and C# as the demand on my region is basically for these languages. With these languages I entered the world of web development and then I had to learn javascript. Developing for the .NET Framework was exciting at first but I constantly felt as I was getting tied up by Microsoft (and of course the .NET Framework was driving me away from Linux). For desktop development I could do pretty much everything I did with .NET using C++ with Qt but for web development I had to look for an alternative. Quickly I found Django and then I proceeded to learn Python so I could use Django. Nowadays I am learning iOS development with Objective-C. So far it was pretty much easy to learn all these languages (C++ trained me well) but I am worried that someday I won't be able to keep track of them all. Just to clarify. The only languages I learned cause I had to were C# and Java. All of the others I learned for fun, because I love programming and learning new things. Also I like to keep my skills sharp on desktop, web and mobile development. My question is: How do you keep track of multiple programming languages? (I mean, keep track of changes to these languages and keep your skills sharp) and: Is there such a thing as enough programming languages?

    Read the article
