Search Results

Search found 260 results on 11 pages for 'mat harden'.


  • Memory footprint of a parsed XML file in Classic ASP?

    - by Pete Duncanson
    Anyone know of a way to find out the amount of memory used by (or the size of) an XMLDocument once it has parsed an XML file? I've been doing "beer mat" calculations so far, but have been asked to come up with some more legit numbers through monitoring of some kind. I need to create about 1500 XML files (via the FreeThreaded XML DOM object), which range between 3 and 9 KB in size, and store them in Application vars, but our SysAdmin is worried about us gobbling up too much memory. Other than the crude method of booting up a fresh IIS instance, loading everything in, and comparing before-and-after memory usage in Task Manager, I can't think of a way of doing it with a bit more accuracy.
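
    A rough back-of-the-envelope check (editor's numbers, assumptions rather than measurements): 1500 documents at the 9 KB top end is about 13 MB of raw XML. A parsed DOM typically inflates the source text several times over (node objects, pointers, attribute tables; figures around 4-10x are commonly quoted for MSXML), which would put the working set somewhere in the 50-130 MB range before IIS overhead. For a less crude measurement than Task Manager, Performance Monitor's Private Bytes counter on the IIS worker process, sampled before and after the load, gives a per-process number.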

    Read the article

  • opencv image conversion from rgb to hsv

    - by kaushalyjain
    When I run the following code on a sample RGB image and then display the converted HSV image, the two look completely different... can anyone explain why? Or can you suggest a way to avoid this? It's the same image, after all.

      Mat img_hsv, img_rgb, red_blob, blue_blob;
      img_rgb = imread("pic.png", 1);
      cvtColor(img_rgb, img_hsv, CV_RGB2HSV);
      namedWindow("win1", CV_WINDOW_AUTOSIZE);
      imshow("win1", img_hsv);
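
    Two things are going on here, sketched below (an editor's example, not the asker's code): imread() returns pixels in BGR order, so the conversion should use CV_BGR2HSV rather than CV_RGB2HSV; and imshow() always interprets a 3-channel Mat as BGR, so a raw HSV Mat will always look psychedelic in a window. Convert back before displaying:

      // Editor's sketch (OpenCV 2.x; use <cv.h>/<highgui.h> on 2.1 if
      // opencv2/opencv.hpp is unavailable).
      #include <opencv2/opencv.hpp>

      int main() {
          cv::Mat img_bgr = cv::imread("pic.png", 1);      // BGR, not RGB
          if (img_bgr.empty()) return 1;

          cv::Mat img_hsv;
          cv::cvtColor(img_bgr, img_hsv, CV_BGR2HSV);      // correct source order

          // ... do the actual processing in HSV space here ...

          cv::Mat img_back;
          cv::cvtColor(img_hsv, img_back, CV_HSV2BGR);     // back to BGR for display
          cv::imshow("original", img_bgr);
          cv::imshow("round-trip", img_back);              // should match the original
          cv::waitKey(0);
          return 0;
      }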

    Read the article

  • Why use hashing to create pathnames for large collections of files?

    - by Stephen
    Hi, I noticed a number of cases where an application or database stores a collection of files/blobs using a hash to determine the path and filename. I believe the intended outcome is that the path never gets too deep and the folders never get too full - too many files (or folders) in one folder makes for slower access. EDIT: Examples are often digital libraries or repositories, though the simplest example I can think of (one that can be installed in about 30s) is the Zotero document/citation database. Why do this? EDIT: thanks Mat for the answer - does this technique of using a hash to create a file path have a name? Is it a pattern? I'd like to read more, but have failed to find anything in the ACM Digital Library.
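
    A minimal sketch of the technique (editor's example; the helper name and layout are mine, and real systems usually use a stable digest such as MD5/SHA-1 rather than std::hash, whose values can differ across builds): hash the item's key, render it as hex, and use the leading digits as nested directory names. With two 2-digit levels, each level fans out to 256 subdirectories, so a million files average roughly 15 per leaf directory.

      #include <cstdio>
      #include <functional>
      #include <string>

      // Map a key to a fanned-out relative path like "3f/a2/3fa2...d41.dat".
      std::string pathFor(const std::string& key) {
          unsigned long long h = std::hash<std::string>{}(key);
          char buf[32];
          std::snprintf(buf, sizeof(buf), "%016llx", h);
          std::string hex(buf);
          return hex.substr(0, 2) + "/" + hex.substr(2, 2) + "/" + hex + ".dat";
      }

      int main() {
          std::printf("%s\n", pathFor("report-2010-03.pdf").c_str());
          return 0;
      }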

    Read the article

  • Using polyfit to predict where the object falls?

    - by ZaZu
    Hi there, I have tracking data for an object thrown in a parabolic arc. There are 30 images in total, taken at fixed intervals from the start position to the end. I have managed to extract the x,y coordinates of the object in all 30 images... I think that using polyfit (or maybe polyval?) may help me predict where the object will fall after the first 15 images... I just want to know: how can polyfit be used with the 30 x,y coordinates I have? (I have a loop that extracts each image from a mat file one row at a time up to 30, and then plots that image... so should I use polyfit in the same loop, before or after the plot?) Any ideas? Thanks!
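
    In MATLAB the fit is a one-liner and belongs after the extraction loop, once the coordinates are collected, not inside it: p = polyfit(x(1:15), y(1:15), 2) fits a parabola to the first half of the track, and polyval(p, x_future) extrapolates it. For readers outside MATLAB, here is a rough C++ sketch of the same degree-2 least-squares fit via the normal equations (an editor's example; the helper names are not from any library):

      #include <array>
      #include <cstdio>
      #include <vector>

      static double det3(const std::array<std::array<double,3>,3>& m) {
          return m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
               - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
               + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
      }

      // Fit y = a*x^2 + b*x + c; returns {a, b, c} (what polyfit(x,y,2) does).
      std::array<double,3> fitParabola(const std::vector<double>& x,
                                       const std::vector<double>& y) {
          double s[5] = {0}, t[3] = {0};   // s[k] = sum x^k, t[k] = sum x^k * y
          for (size_t i = 0; i < x.size(); ++i) {
              double p = 1.0;
              for (int k = 0; k < 5; ++k) {
                  s[k] += p;
                  if (k < 3) t[k] += p * y[i];
                  p *= x[i];
              }
          }
          std::array<std::array<double,3>,3> A = {{{s[4], s[3], s[2]},
                                                   {s[3], s[2], s[1]},
                                                   {s[2], s[1], s[0]}}};
          std::array<double,3> rhs = {t[2], t[1], t[0]};
          std::array<double,3> coef;
          double d = det3(A);              // assumes >= 3 distinct x values
          for (int c = 0; c < 3; ++c) {    // Cramer's rule, column by column
              auto Ac = A;
              for (int r = 0; r < 3; ++r) Ac[r][c] = rhs[r];
              coef[c] = det3(Ac) / d;
          }
          return coef;
      }

      int main() {
          std::vector<double> x, y;        // synthetic track: y = -4.9t^2 + 12t + 1
          for (int i = 0; i < 15; ++i) {
              double t = i * 0.1;
              x.push_back(t);
              y.push_back(-4.9*t*t + 12.0*t + 1.0);
          }
          std::array<double,3> c = fitParabola(x, y);
          double tq = 2.0;                 // "polyval": evaluate at a later time
          std::printf("a=%.2f b=%.2f c=%.2f, y(%.1f)=%.2f\n",
                      c[0], c[1], c[2], tq, (c[0]*tq + c[1])*tq + c[2]);
          return 0;
      }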

    Read the article

  • How to analyse Dalvik GC behaviour?

    - by HRJ
    I am developing an application on Android. It is a long-running application that continuously processes sensor data. While running the application I see a lot of GC messages in logcat; about one every second. This is most probably because of objects being created and immediately de-referenced in a loop. How do I find which objects are being created and released immediately? All the Java heap analysis tools that I have tried (*) focus on the counts and sizes of objects on the heap. While those are useful, I am more interested in finding the sites where short-lived temporary objects get created the most. (*) I tried jcat and Eclipse MAT. I couldn't get hat to work on the Android heap dumps; it complained of an unsupported dump file version.

    Read the article

  • C++ operator overloading doubt

    - by avd
    I have a code base in which the Matrix class has these two definitions for operator():

      template <class T> T& Matrix<T>::operator() (unsigned row, unsigned col) { ...... }

      template <class T> T Matrix<T>::operator() (unsigned row, unsigned col) const { ...... }

    One thing I understand is that the second one does not return a reference, but what does the const mean in the second declaration? Also, which function is called when I write, say, mat(i,j)?
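
    A self-contained sketch of the rule (editor's example with a minimal stand-in for the question's Matrix): the trailing const makes that overload callable on const objects, and overload resolution picks it whenever the object you call through is const; on a non-const object, mat(i,j) calls the T& version, whose result you can assign to.

      #include <iostream>
      #include <vector>

      template <class T>
      class Matrix {
          std::vector<T> data_;
          unsigned cols_;
      public:
          Matrix(unsigned rows, unsigned cols) : data_(rows * cols), cols_(cols) {}
          T& operator()(unsigned r, unsigned c)       { return data_[r * cols_ + c]; }
          T  operator()(unsigned r, unsigned c) const { return data_[r * cols_ + c]; }
      };

      int main() {
          Matrix<double> m(3, 3);
          m(1, 2) = 42.0;                  // non-const object -> T& overload, assignable
          const Matrix<double>& cm = m;
          std::cout << cm(1, 2) << "\n";   // const object -> const overload, read-only
          // cm(1, 2) = 7.0;               // error: const overload returns T by value
          return 0;
      }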

    Read the article

  • Image transfer using the Hessian protocol from a client's folder to a Tomcat server

    - by ?? ?
    My goal is to upload an image (.jpg or .png) from the client's folder to a Tomcat 6 server over the Hessian protocol, do image processing with OpenCV on the server, then return the image to the client. Question 1: Are the following transfer steps correct? Put a test.jpg image in the client's folder -- convert test.jpg to a BufferedImage in the client.java (main.java) class -- convert the BufferedImage to a Mat or IplImage on the server for use with OpenCV. I have set up a hello-world sample from "Simple Messaging Example using Hessian", and searched "Hessian with large binary data" and other websites, but still don't know how to use it! Question 2: Is there any related Java sample code? Thank you very much. Btw, I am using ubuntu12+netbeans7.2
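
    One workable pattern (editor's sketch, not from the question): skip the BufferedImage round-trip and send the file's raw JPEG/PNG bytes over Hessian as a byte array, then let OpenCV decode them in memory on the server. The OpenCV side of that decode/encode step would look roughly like this in C++; on a Java server the JavaCV equivalents mirror these calls:

      #include <vector>
      #include <opencv2/opencv.hpp>

      // Decode the bytes of a .jpg/.png received over the wire into a Mat.
      cv::Mat decodeImageBytes(const std::vector<unsigned char>& bytes) {
          cv::Mat img = cv::imdecode(bytes, 1);   // 1 = force 3-channel color
          return img;                             // img.empty() if bytes were invalid
      }

      // Re-encode a processed Mat so the bytes can be returned to the client.
      std::vector<unsigned char> encodeImageBytes(const cv::Mat& img) {
          std::vector<unsigned char> out;
          cv::imencode(".png", img, out);
          return out;
      }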

    Read the article

  • ATG Live Webcast March 29: Diagnosing E-Business Suite JVM and Forms Performance Issues (Performance Series Part 4 of 4)

    - by BillSawyer
    The next webcast in our popular EBS series on performance management is going to be a showstopper. Dave Suri, Project Lead, Applications Performance, and Gustavo Jimenez, Senior Development Manager, will discuss some of the steps involved in triaging and diagnosing E-Business Suite systems related to JVM and Forms components. Please join us for our next ATG Live Webcast on Mar. 29, 2012: Triage and Diagnostics for E-Business Suite JVM and Forms.

    The topics covered in this webcast will be:
    - Overall Menu/Sections
    - Architecture
    - Patches/Certified browsers/JDK versions
    - JVM Tuning
    - JVM Tools (jstat, Eclipse MAT, IBM TDA)
    - Forms Tools (strace/FRD)
    - Java Concurrent Program options location
    - Case Studies: JVM thread dump case for Oracle Advanced Product Catalog; Forms FRD trace relating to saving an SR; Java Concurrent Program for BT

    Date: Thursday, March 29, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Dave Suri, Project Lead, Applications Performance; Gustavo Jimenez, Senior Development Manager
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed: Domestic Participant Dial-In Number: 877-697-8128; International Participant Dial-In Number: 706-634-9568; Additional International Dial-In Numbers Link; Dial-In Passcode: 99342
    To see the presentation: The Direct Access Web Conference details are: Website URL: https://ouweb.webex.com; Meeting Number: 597073984

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • So long Oracle ...

    - by arungupta
    ... and thanks for all the fish! This Friday (October 18, 2013) is my last day at Oracle. After:
    - Publishing almost 1400 blog entries with 5500+ comments on them
    - Working in the Java EE team since inception
    - Visiting 35+ countries and several cities around the world
    - Speaking at all major Java conferences and lots of Java User Groups
    - 15 years as JavaOne staff
    - Meeting and working with the best of the best in the Java community
    - Most importantly, having lots of fun
    ...it's time for me to move on! No new blog entries will be posted on this blog. Feel free to subscribe to The Aquarium for the latest updates on Java EE and GlassFish. I'll continue to publish all the excellent content that you've been used to, from now on at blog.arungupta.me. Read my new blog to learn about my new adventures! Here are some of the conference badges collected over the past years, and the cities visited (embedded map: Cities Visited by "Miles To Go..."). The comments on this blog are disabled as I'll not be able to respond to them. Feel free to leave comments on the new blog and I'd love to follow up with you there. Thank you very much for all the support that has been shown on this blog. I'd like to conclude with a Hindi song that I've been humming for the past few days now ... Abhi alvida mat kaho doston ... Na jaane kahan phir mulaqaat ho ... Kyonki ... Beete huye lamhon ki kasak saath to hogi ... Khawabon mein hi ho chahe mulaqaat to hogi ... For my non-Hindi readers, here is my paraphrased meaning ... Don't say goodbye yet my friends ... We'll likely meet somewhere else ... Because ... We'll always have the memories of the wonderful time spent together ... Maybe in dreams, but we will meet again ... With that, over and out, and see you at blog.arungupta.me!

    Read the article

  • Oracle Text: setting up full-text search for Japanese documents

    - by Yuichi Hayashi
    What is Oracle Text? Oracle Text is the full-text search feature built into Oracle Database, and it is available in every edition: Oracle Database Enterprise Edition (EE), Oracle Database Standard Edition (SE), Oracle Database Standard Edition One, and Oracle Database Express Edition (XE). It is installed by default when a database is created with the Database Configuration Assistant (DBCA). Searching Japanese text takes only the following steps (1)-(5).

    (1) Preparation: as the Oracle Text administrator (ctxsys), grant the CTXAPP role to the application user (here SCOTT):

      SQL> connect ctxsys/
      SQL> grant ctxapp to scott;

    (2) Create a table and insert sample rows:

      SQL> connect scott/tiger
      SQL> create table test (
        2    id number primary key,
        3    text varchar2(80) );
      SQL> insert into test ( id, text ) values ( 1, 'The cat sat on the mat' );
      SQL> insert into test ( id, text ) values ( 2, 'The dog barked like a dog' );
      SQL> insert into test ( id, text ) values ( 3, '??????????' );
      SQL> commit;

    (3) Create a lexer preference (here named test_lexer) using JAPANESE_VGRAM_LEXER:

      SQL> connect scott/tiger
      SQL> execute ctx_ddl.create_preference('test_lexer','JAPANESE_VGRAM_LEXER');

    About lexers: a lexer determines how Oracle Text breaks document text into indexable tokens, which matters for Japanese because words are not separated by spaces. Two lexers handle Japanese: JAPANESE_VGRAM_LEXER, which indexes overlapping character grams of roughly two characters, and JAPANESE_LEXER (available since Oracle Text 9.0.1), which segments text into words using a dictionary.

    (4) Create the text index on the TEST table:

      SQL> create index test_idx on test ( text )
        2    indextype is ctxsys.context
        3    parameters ('lexer test_lexer');

    (5) Run a full-text query with CONTAINS:

      SQL> col text for a30
      SQL> select id, text from test
        2    where contains ( text, '??' ) > 0;

              ID TEXT
      ---------- ------------------------------
               3 ??????????

    Read the article

  • What characters are NOT escaped with a mysqli prepared statement?

    - by barfoon
    Hey everyone, I'm trying to harden some of my PHP code and use mysqli prepared statements to better validate user input and prevent injection attacks. I switched away from mysqli_real_escape_string as it does not escape % and _. However, when I create my query as a mysqli prepared statement, the same flaw is still present. The query pulls a user's salt value based on their username. I'd do something similar for passwords and other lookups. Code:

      $db = new sitedatalayer();
      if ($stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` LIKE ? LIMIT 1")) {
          $stmt->bind_param('s', $username);
          $stmt->execute();
          $stmt->bind_result($salt);
          while ($stmt->fetch()) {
              printf("%s\n", $salt);
          }
          $stmt->close();
      } else return false;

    Am I composing the statement correctly? If I am, what other characters need to be examined? What other flaws are there? What is best practice for doing these types of selects? Thanks,

    Read the article

  • How to make Shared Keys .ssh/authorized_keys and sudo work together?

    - by farinspace
    I've set up .ssh/authorized_keys and am able to log in with the new "user" using the pub/private key ... I have also added "user" to the sudoers list ... the problem I have now is when I try to execute a sudo command, something simple like:

      $ sudo cd /root

    it will prompt me for my password, which I enter, but it doesn't work (I am using the private key password I set). Also, I've disabled the user's password using passwd -l user. What am I missing? Somewhere my initial remarks are being misunderstood ... I am trying to harden my system ... the ultimate goal is to use pub/private keys for logins instead of simple password authentication. I've figured out how to set all that up via the authorized_keys file. Additionally, I will ultimately prevent server logins through the root account. But before I do that I need sudo to work for a second user (the user I will log into the system with all the time). For this second user I want to prevent regular password logins and force only pub/private key logins; if I don't lock the user via passwd -l user, then if I don't use a key, I can still get into the server with a regular password. But more importantly I need to get sudo to work with a pub/private key setup for a user whose password has been disabled.

    Edit: Ok, I think I've got it (the solution):

    1) I've adjusted /etc/ssh/sshd_config and set PasswordAuthentication no. This will prevent ssh password logins (be sure to have a working public/private key setup prior to doing this).

    2) I've adjusted the sudoers list via visudo and added:

      root ALL=(ALL) ALL
      dimas ALL=(ALL) NOPASSWD: ALL

    3) root is the only user account that will have a password. I am testing with two user accounts, "dimas" and "sherry", which do not have a password set (passwords are blank, passwd -d user).

    The above essentially prevents everyone from logging into the system with passwords (a public/private key must be set up). Additionally, users in the sudoers list have admin abilities. They can also su to different accounts. So basically "dimas" can sudo su sherry; however, "dimas" can NOT do su sherry. Similarly, any user NOT in the sudoers list can NOT do su user or sudo su user.

    NOTE: The above works but is considered poor security. Any script that is able to run code as the "dimas" or "sherry" users will be able to execute sudo to gain root access. A bug in ssh that allows remote users to log in despite the settings, a remote code execution in something like Firefox, or any other flaw that allows unwanted code to run as the user will now be able to run as root. Sudo should always require a password, or you may as well log in as root instead of some other user.

    Read the article

  • Is it safe to enable forced ASLR via EMET on Windows?

    - by D.W.
    I'd like to enable forced ASLR for all DLLs on Windows. Is this safe? Background: ASLR is an important security mechanism that helps defend against code-injection attacks. DLLs can opt into ASLR, and most do, but some DLLs have not opted in. If a program loads even a single non-ASLRized DLL, the program doesn't get the benefit/protection of ASLR. This is a problem, because a non-trivial number of DLLs haven't opted into ASLR. For instance, it was recently revealed that Dropbox injects a DLL into a bunch of processes, and the Dropbox DLL doesn't have ASLR turned on, which negates any ASLR protection those processes would otherwise have had. Unfortunately, there are many other widely used DLLs that haven't opted into ASLR. This is bad for system security. Microsoft provides several ways to turn on ASLR for all DLLs, even ones that haven't opted in: on Windows 7 and Windows Server 2008, you can enable "Force ASLR" in the registry; on all Windows versions, you can use Microsoft's EMET tool and enable EMET's "Mandatory ASLR" option. These methods are possible because all DLLs are compiled as position-independent code and can be relocated to a random location even if they haven't opted into ASLR. These options ensure that ASLR is turned on even if the developers of the DLL forgot to opt in. Thus, forcing on ASLR system-wide may help system security. In principle, turning on forced ASLR could potentially break a poorly written DLL, so there is some risk of breakage. I'm interested in finding out just how significant this risk is. I suspect this kind of breakage might be extremely rare. Here's what I've been able to find:
    - Microsoft has done compatibility testing with several dozen widely used applications. The only one they found where Mandatory ASLR causes problems is Windows Media Player. All the other applications continue working fine. (See pp. 39-41 of this document.)
    - I've seen some anecdotal reports that enabling "Mandatory ASLR"/"Force ASLR" is fine and unlikely to cause problems.
    - CERT reports that AMD and ATI video drivers used to crash if you enabled forced ASLR, but their latest drivers have now fixed this problem. They don't list any other drivers with this problem.
    - A forum post from Microsoft shows no other applications with compatibility problems when ASLR is forced on, as of 2011.
    - A user reports that borderlands.exe, a video game by Gearbox Software, crashes if you turn on mandatory ASLR.
    What else should I know? Is it relatively safe to turn on Force ASLR / Mandatory ASLR system-wide to harden the security of my system, or will I be in for a world of pain and broken applications? How significant is the risk of compatibility problems and broken applications?

    Read the article

  • Using OpenCV in QTCreator (linking problem)

    - by Jane
    Greetings! I have a problem linking the simplest test program in Qt Creator.

    CODE:

      #include <QtCore/QCoreApplication>
      #include <cv.h>
      #include <highgui.h>
      #include <cxcore.hpp>

      using namespace cv;

      int _tmain(int argc, _TCHAR* argv[])
      {
          cv::Mat M(7,7,CV_32FC2,Scalar(1,3));
          return 0;
      }

    .pro file:

      QT -= gui
      TARGET = testopencv
      CONFIG += console
      CONFIG -= app_bundle
      INCLUDEPATH += C:/OpenCV2_1/include/opencv
      TEMPLATE = app
      LIBS += C:/OpenCV2_1/lib/cxcore210d.lib \
              C:/OpenCV2_1/lib/cv210d.lib \
              C:/OpenCV2_1/lib/highgui210d.lib \
              C:/OpenCV2_1/lib/cvaux210d.lib
      SOURCES += main.cpp

    I've also tried to use -L and -l, like LIBS += -LC:/OpenCV2_1/lib -lcxcored, and a .pri file:

      QMAKE_LIBDIR += C:/OpenCV2_1/lib/Debug
      LIBS += -lcxcore210d \
              -lcv210d \
              -lhighgui210d

    The errors are like:

      debug/main.o:C:\griskin\test\app\testopencv/../../../../OpenCV2_1/include/opencv/cxcore.hpp:97: undefined reference to cv::format(char const*, ...)'

    Could anyone help me? Thanks! In Visual Studio it works, but I need it to work in Qt Creator.

    Read the article

  • How to use opencv header in visual studio windows app

    - by yooo
    I did my work in a Visual Studio 2010 C++ console project, but now I am trying to convert it into a Windows Forms app (making an interface for it) in Visual Studio C++. For that I have to add some header files manually to the Windows Forms application, and that shows me errors like these:

      DetectRegions.h(10): error C2146: syntax error : missing ';' before identifier 'filename'
      DetectRegions.h(10): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      DetectRegions.h(10): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      DetectRegions.h(11): error C2061: syntax error : identifier 'string'
      DetectRegions.h(14): error C2143: syntax error : missing ';' before '<'
      DetectRegions.h(14): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      DetectRegions.h(14): error C2238: unexpected token(s) preceding ';'
      DetectRegions.h(16): error C2143: syntax error : missing ';' before '<'
      DetectRegions.h(16): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      DetectRegions.h(16): error C2238: unexpected token(s) preceding ';'
      DetectRegions.h(17): error C2061: syntax error : identifier 'RotatedRect'
      DetectRegions.h(18): error C2653: 'cv' : is not a class or namespace name
      DetectRegions.h(18): error C2146: syntax error : missing ';' before identifier 'histeq'
      DetectRegions.h(18): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      DetectRegions.h(18): error C2061: syntax error : identifier 'Mat'
      DetectRegions.h(18): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      DetectRegions.h(18): warning C4183: 'histeq': missing return type; assumed to be a member function returning 'int'

    Plate.h is the same as DetectRegions.h. I add the other OpenCV headers in Form1.h, like:

      #include "opencv2/features2d/features2d.hpp"
      #include <opencv/highgui.h>
      #include "opencv2/opencv.hpp"
      .......
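
    An educated guess at the fix (editor's sketch; the member names are inferred from the error messages, not taken from the real header): those C2146/C2061 errors usually mean the header uses string, vector, Mat, and RotatedRect without declaring them, which the console project masked via its own includes. A header shaped roughly like this should compile in the Windows Forms project as well:

      #pragma once
      #include <string>
      #include <vector>
      #include <opencv2/opencv.hpp>   // declares cv::Mat, cv::RotatedRect

      class DetectRegions {
      public:
          std::string filename;                    // line 10: 'string' was undeclared
          void setFilename(const std::string& s) { filename = s; }
          std::vector<cv::RotatedRect> candidates; // qualify std:: and cv:: in headers
          cv::Mat histeq(cv::Mat in);              // line 18: unqualified 'Mat'
      };

    Avoid "using namespace std;" or "using namespace cv;" inside headers; qualifying the names instead makes the header work no matter which .cpp includes it.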

    Read the article

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application I'm going to talk about in the next few lines: XYZ is a data-masking workbench, an Eclipse RCP application. You give it a source table column and a target table column; it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. When I mask n tables at a time, n threads are launched by this app. Here is the issue: I ran into a production issue on the first rollout of the app. Unfortunately, I don't have any logs to get to the root cause. However, I tried to run the app in the test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this prepared-statement object, and thereby n threads have n such objects. T4CPreparedStatement seemed to have character arrays lastBoundChars and bindChars, each of size char[300000]. So I researched a bit (google!), obtained ojdbc6.jar, and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars. So, my questions are:
    1. Have you ever run into an issue like this?
    2. Do you know the significance of lastBoundChars / bindChars?
    3. I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT - and this was the main identified issue - so I don't really think I could be wrong?)
    I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681 Appreciate your suggestions / advice.

    Read the article

  • JVM terminates when launching eclipse with J2SE 6.0 on mac os x (need J2SE 6.0 for Oracle enterprise

    - by rooban bajwa
    I know my issue has partly been addressed at this link http://stackoverflow.com/questions/245803/jvm-terminates-when-launching-eclipse-mat-on-mac-os-with-j2se-60 but that was a year+ ago... plus the link provided there, http://landonf.bikemonkey.org/static/soylatte/, no longer seems to be alive (I mean the download section on that page no longer provides the 32-bit port of J2SE 6.0 for Mac OS X 10.5). I am trying to run Eclipse 3.5 on Mac OS X 10.5. It works fine with J2SE 5.0. But when I installed the Oracle Enterprise Pack for Eclipse, it required Eclipse to be started with a J2SE 6.0 JVM, otherwise it gets disabled. Here's the exact message I get from it: "You are running Eclipse on Java VM version: 1.5.0_22. Oracle Enterprise Pack for Eclipse requires Java version 6 or higher. Click next to configure a compatible Java VM." It asks me to point to a J2SE 6.0 JVM; when I do that (i.e. point it to "/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home"), it asks me to restart Eclipse, and when I do, Eclipse just bombs with a "JVM terminated" error. So I need to start Eclipse with a J2SE 6.0 JVM, but Eclipse needs Carbon, which is only available in 32-bit, and hence I can't start Eclipse with the J2SE 6.0 JVM, which is only available in 64-bit mode from Apple. And the site providing the 32-bit port of the J2SE 6.0 JVM no longer seems to be active. Can someone help me with this issue? Thanks in advance.

    Read the article

  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time-series data frame with, let us say, 100 time series of length 600, each in one column of the data frame. I want to pick 4 of the time series at random and then assign them random weights that sum to one (e.g. 0.1, 0.5, 0.3, 0.1). Using those, I want to compute the mean of the sum of the 4 weighted time-series variables (i.e. a convex combination). I want to do this, let us say, 100k times and store each result in the form ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean, so that I get a 9*100k df. I tried some things already, but R is very bad with loops, and I know vector-oriented solutions are better because of R's design. Thanks. Here is what I did, and I know it is horrible. The df is in the form:

      v1,v2,v3.....v100
      1,5,6,.......9
      2,4,6,.......10
      3,5,8,.......6
      2,2,8,.......2
      etc

      e=NULL
      for (x in 1:100000) {
        s=sample(1:100,4)                 # pick 4 variables randomly
        a=sample(seq(0,1,0.01),1)
        b=sample(seq(0,1-a,0.01),1)
        c=sample(seq(0,(1-a-b),0.01),1)
        d=1-a-b-c
        e=c(a,b,c,d)                      # 4 random weights
        average=mean(timeseries.df[,s]%*%t(e))
        e=rbind(e,s,average)              # in the end I get the 9*100k df
      }

    The procedure runs way too slowly. EDIT: Thanks for the help I've had. I am not used to thinking in R, and I am not very used to translating every problem into a matrix-algebra equation, which is what you need in R. The problem becomes a little more complex if I want to calculate the standard deviation: I need the covariance matrix, and I am not sure if/how I can pick random elements for each sample from the original timeseries.df covariance matrix and then compute the sample variance, t(sampleweights)%*%sample_cov.mat%*%sampleweights, to get in the end the ts.weighted_standard_dev matrix. Last question: what is the best way to proceed if I want to bootstrap the original df x times and then apply the same computations, to test the robustness of my data? Thanks.

    Read the article

  • What influences running time of reading a bunch of images?

    - by remi
    I have a program where I read a handful of tiny images (50000 images of size 32x32). I read them using OpenCV's imread function, in a program like this:

      std::vector<std::string> imageList; // initialized with the full paths to the 50K images
      for (string s : imageList) {
          cv::Mat m = cv::imread(s);
      }

    Sometimes it will read the images in a few seconds. Sometimes it takes a few minutes to do so. I run this program in GDB with a breakpoint beyond the reading loop, so it's not because I'm stuck at a breakpoint. The same "erratic" behaviour happens when I run the program outside GDB. The same "erratic" behaviour happens with the program compiled with or without optimisation. The same "erratic" behaviour happens whether or not I have other programs running in the background. The images are always at the same place on the hard drive of my machine. I run the program on a SUSE Linux distribution, compiled with gcc. So I am wondering: what could affect the time needed to read the images that much?
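
    The usual suspect for this pattern is the OS page cache: cold reads hit the disk, warm reads come straight from RAM. A quick way to check (editor's sketch) is to time two consecutive passes over the same list; a dramatically faster second pass points at caching rather than at the program itself. On Linux, dropping caches between runs (echo 3 > /proc/sys/vm/drop_caches as root) makes the cold case reproducible.

      #include <chrono>
      #include <cstdio>
      #include <string>
      #include <vector>
      #include <opencv2/opencv.hpp>

      // Time one full pass of imread over every file in the list.
      double passSeconds(const std::vector<std::string>& files) {
          auto t0 = std::chrono::steady_clock::now();
          for (const std::string& s : files) {
              cv::Mat m = cv::imread(s);
              if (m.empty()) std::fprintf(stderr, "failed: %s\n", s.c_str());
          }
          return std::chrono::duration<double>(
              std::chrono::steady_clock::now() - t0).count();
      }

      int main() {
          std::vector<std::string> imageList = /* ...the 50K paths... */ {};
          std::printf("first pass:  %.2f s\n", passSeconds(imageList));  // cold
          std::printf("second pass: %.2f s\n", passSeconds(imageList));  // warm
          return 0;
      }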

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11, which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhale a Solaris Container from Solaris 10, or entire Solaris 10 systems ("the global zone"), into virtualized environments on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page.

    Install prerequisites
    The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand.

      # pkg install pkg:/system/zones/brand/brand-solaris10
                 Packages to install:   1
             Create boot environment:  No
      Create backup boot environment: Yes

      DOWNLOAD    PKGS  FILES  XFER (MB)
      Completed    1/1  44/44    0.4/0.4

      PHASE          ACTIONS
      Install Phase    74/74

      PHASE                       ITEMS
      Package State Update Phase    1/1
      Image State Update Phase      2/2

    That took only a few minutes, and didn't require a reboot.

    Install the Solaris 10 zone
    Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (unlike what is stated in the README) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration:
    - -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter.
    - -z zonename - the name of the zone you would like to create.
    - -i interface - the package will create an exclusive-IP zone using a virtual NIC (vnic) based on this physical interface. In my case, I have a NIC called rge0.
    - -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10.

    Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes.

      # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10
      ...
      ...
      Checking disk-space for extraction
      Ok
      Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ...
      100% [===============================]
      Checking data integrity
      Ok
      Checking platform compatibility
      The host and the image do not have the same Solaris release:
              host Solaris release: 5.11
             image Solaris release: 5.10
      Will create a Solaris 10 branded zone.
      Warning: could not find a defaultrouter
      Zone won't have any defaultrouter configured

            IMAGE: ./solaris-10u10-x86.bin
             ZONE: s10u10
         ZONEPATH: /zones/s10u10
        INTERFACE: rge0
             VNIC: vnicZBI13379
         MAC ADDR: 2:8:20:5c:1a:cc
          IP ADDR: 192.168.1.100
          NETMASK: 255.255.255.0
        DEFROUTER: NONE
         TIMEZONE: US/Arizona

      Checking disk-space for installation
      Ok
      Installing in /zones/s10u10 ...
      100% [===============================]
      Using a static exclusive-IP
      Attaching s10u10
      Booting s10u10
      Waiting for boot to complete
      booting...
      booting...
      booting...
      Zone s10u10 booted
      The zone's root password has been set using the root password of the local host.
      You can change the zone's root password to further harden the security of the
      zone: being root, log into the zone from the local host with the command
      'zlogin s10u10'. Once logged in, change the root password with the command
      'passwd'.

    The nifty part in my opinion (besides being so easy) is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment. Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use Old School ifconfig, and then I'll use the new ipadm and dladm commands.

      # ifconfig -a4
      lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
              inet 127.0.0.1 netmask ff000000
      rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2
              inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255
              ether 0:14:d1:18:ac:bc
      vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
              inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255
              ether 8:0:27:f8:62:1c

      # dladm show-phys
      LINK       MEDIA      STATE   SPEED  DUPLEX   DEVICE
      yge0       Ethernet   unknown 0      unknown  yge0
      yge1       Ethernet   unknown 0      unknown  yge1
      rge0       Ethernet   up      1000   full     rge0
      vboxnet0   Ethernet   up      1000   full     vboxnet0

      # dladm show-link
      LINK                 CLASS  MTU   STATE    OVER
      yge0                 phys   1500  unknown  --
      yge1                 phys   1500  unknown  --
      rge0                 phys   1500  up       --
      vboxnet0             phys   1500  up       --
      vnicZBI13379         vnic   1500  up       rge0
      s10u10/vnicZBI13379  vnic   1500  up       rge0
      s10u10/net0          vnic   1500  up       rge0

      # dladm show-vnic
      LINK                 OVER  SPEED  MACADDRESS       MACADDRTYPE  VID
      vnicZBI13379         rge0  1000   2:8:20:5c:1a:cc  random       0
      s10u10/vnicZBI13379  rge0  1000   2:8:20:5c:1a:cc  random       0
      s10u10/net0          rge0  1000   2:8:20:9d:d0:79  random       0

      # ipadm show-addr
      ADDROBJ       TYPE    STATE  ADDR
      lo0/v4        static  ok     127.0.0.1/8
      rge0/_a       dhcp    ok     192.168.1.3/24
      vboxnet0/_a   static  ok     192.168.56.1/24
      lo0/v6        static  ok     ::1/128

    Log into the zone
    The install step already booted the zone, so let's log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can log into which zones. Voila! A Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and the output from a ping to a nearby host.

      $ zlogin s10u10
      zlogin: You lack sufficient privilege to run this command (all privs required)
      savit@home:~$ sudo zlogin s10u10
      Password:
      [Connected to zone 's10u10' pts/5]
      Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
      # uname -a
      SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc
      # ifconfig -a4
      lo0: flags=2001000849 mtu 8232 index 1
              inet 127.0.0.1 netmask ff000000
      vnicZBI13379: flags=1000843 mtu 1500 index 2
              inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
              ether 2:8:20:5c:1a:cc
      # bash
      bash-3.2# ifconfig -a
      lo0: flags=2001000849 mtu 8232 index 1
              inet 127.0.0.1 netmask ff000000
      vnicZBI13379: flags=1000843 mtu 1500 index 2
              inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
              ether 2:8:20:5c:1a:cc
      bash-3.2# ping 192.168.1.2
      192.168.1.2 is alive

    For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes.

      bash-3.2# svcs apache2
      STATE          STIME    FMRI
      disabled       12:38:46 svc:/network/http:apache2
      bash-3.2# svcadm enable apache2

    Summary
    In just a few minutes, I built a functioning virtual Solaris 10 environment under my Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you'll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB - Calvin Sun, Oracle
    InnoDB is the default storage engine for Oracle's MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning - Dimitri Kravtchuk, Oracle
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no "silver bullet." But understanding the configuration setting's impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others - the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations - Calvin Sun, Oracle
    Many top Web properties rely on Oracle's MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database - that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster - Andrew Morgan and John Duncan, Oracle
    Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning - Inaam Rana, Oracle
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP - Nizameddin Ordulu, Facebook, and Inaam Rana, Oracle
    Data compression is an important capability of the InnoDB storage engine for Oracle's MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$300 over the on-site fee - Register Now!

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

    New Self-Healing Replication Clusters
    The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

    New Replication Administration and Failover Utilities
    As mentioned above, the new Replication features - Global Transaction Ids specifically - are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or automatic failover to the most up-to-date slave (that is the default, but it is user-configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering-related blogs, are discussed in detail in the DevZone article noted above.

    Better Query Optimization and Throughput
    The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:
    - Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    - Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    - Optimizations for Range-based queries - The Optimizer now uses ready statistics vs index-based scans for queries with multiple range values.
    - Optimizations for queries using filesort and ORDER BY - The optimization criteria/decision on execution method is now made at the optimization rather than the parsing stage.
    - Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption.
    You can learn the details about these new features, as well as all of the Optimizer-based improvements in MySQL 5.6, by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

    Also New in MySQL Labs
    As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs. Today is no exception, as we are also releasing the following to Labs for you to download and try, and to let us know your thoughts on where we need to improve:

    InnoDB Online Operations
    MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog.

    InnoDB data access via Memcached API ("NotOnlySQL") - improved refresh of an earlier feature release
    Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

    Improved Transactional Performance, Scale
    The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 labs release: InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing a 2x throughput improvement for read-only activity and a 6x throughput improvement for SELECT range; read/write benchmarks are in progress. More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.

    MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Postgraduate

    - by Hough
    Hi all, firstly, sorry for a somewhat long question, but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the opencv (2.1) C++ interface and I especially like its powerful Mat class and the overloaded operations available for seamless matrix and image operations and transformations. I've also tried (and implemented many small vision projects with) the opencv python interface (new bindings; opencv 2.1) and I really enjoy python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to the opencv C++ interface because I felt that the official new python bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past and although I've used mex files and other means to speed up the program, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. As the project becomes larger and larger, more and more tasks have to be rewritten in C and compiled into mex files, and Matlab becomes nothing more than a glue language. Here come the sub-questions:
    1. For postgrad studies in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers?
    2. For postgrad studies, can you list the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs python (with the incomplete opencv bindings + numpy + scipy + matplotlib).
    3. Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries, including opencv) without ever needing to resort to Matlab or python? In other words, given the currently existing computer vision and machine learning libraries, is C++ alone sufficient for fast prototyping of ideas?
    4. If you're currently using Java or C# for your postgrad work, can you list the reasons why they should be used and how they compare to other languages in terms of available libraries?
    5. What is the de facto vision/machine learning programming language and its associated libraries used in your university research group?
    Thanks in advance.

    Read the article

  • AutoIt script runs without error but I can't see the archive?

    - by Scott
    #include <File.au3>
    #include <Zip.au3>

    ; bad file extensions
    Local $extData = "ade|adp|app|asa|ashx|asp|bas|bat|cdx|cer|chm|class|cmd|com|cpl|crt|csh|der|exe|fxp|gadget|hlp|hta|htr|htw|ida|idc|idq|ins|isp|its|jse|ksh|lnk|mad|maf|mag|mam|maq|mar|mas|mat|mau|mav|maw|mda|mdb|mde|mdt|mdw|mdz|msc|msh|msh1|msh1xml|msh2|msh2xml|mshxml|msi|msp|mst|ops|pcd|pif|prf|prg|printer|pst|reg|rem|scf|scr|sct|shb|shs|shtm|shtml|soap|stm|url|vb|vbe|vbs|ws|wsc|wsf|wsh"
    Local $extensions = StringSplit($extData, "|")

    ; What is the root directory?
    $rootDirectory = InputBox("Root Directory", "Please enter the root directory...")
    archiveDir($rootDirectory)

    Func archiveDir($dir)
        $goDirs = True
        $goFiles = True

        ; Get all the files under the current dir
        $allOfDir = _FileListToArray($dir)

        Local $countDirs = 0
        Local $countFiles = 0
        $imax = UBound($allOfDir)
        For $i = 0 to $imax - 1
            If StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D") Then
                $countDirs = $countDirs + 1
            ElseIf StringInStr(($allOfDir[$i]), ".") Then
                $countFiles = $countFiles + 1
            EndIf
        Next

        MsgBox(0, "Value of $countDirs in " & $dir, $countDirs)
        MsgBox(0, "Value of $countFiles in " & $dir, $countFiles)

        If ($countDirs > 0) Then
            Local $allDirs[$countDirs]
            $goDirs = True
        Else
            $goDirs = False
        EndIf

        If ($countFiles > 0) Then
            Local $allFiles[$countFiles]
            $goFiles = True
        Else
            $goFiles = False
        EndIf

        $dirCount = 0
        $fileCount = 0
        For $i = 0 to $imax - 1
            If (StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D")) And ($goDirs == True) Then
                $allDirs[$dirCount] = $allOfDir[$i]
                $dirCount = $dirCount + 1
            ElseIf (StringInStr(($allOfDir[$i]), ".")) And ($goFiles == True) Then
                $allFiles[$fileCount] = $allOfDir[$i]
                $fileCount = $fileCount + 1
            EndIf
        Next

        ; Zip them if need be in current spot using 'ext_zip.zip' as file name, loop through each file ext.
        If ($goFiles == True) Then
            $emax = UBound($extensions)
            $fmax = UBound($allFiles)
            For $e = 0 to $emax - 1
                For $f = 0 to $fmax - 1
                    $currentExt = getExt($allFiles[$f])
                    If ($currentExt == $extensions[$e]) Then
                        $zip = _Zip_Create($dir & "\" & $currentExt & "_zip.zip")
                        _Zip_AddFile($zip, $allFiles[$f])
                    EndIf
                Next
            Next
        EndIf

        ; Get all dirs under current DirCopy
        ; For each dir, recursive call from step 2
        If ($goDirs == True) Then
            $dmax = UBound($allDirs)
            $rootDirectory = $rootDirectory & "\"
            For $d = 0 to $dmax - 1
                archiveDir($rootDirectory & $allDirs[$d])
            Next
        EndIf
    EndFunc

    Func getExt($filename)
        $pos = StringInStr($filename, ".")
        $retval = StringTrimLeft($filename, $pos + 1)
        Return $retval
    EndFunc

    This should output the .zip archives in the directories where it finds the files that need zipping, but it doesn't. Is there something I have to do, after creating the archive and adding files to it in the code, to put the created archive in its directory?

    Read the article

  • AutoIt script runs without error but I can't see the archive? - UPDATE

    - by Scott
    #include <File.au3>
    #include <Zip.au3>
    #include <Array.au3>

    ; bad file extensions
    Local $extData = "ade|adp|app|asa|ashx|asp|bas|bat|cdx|cer|chm|class|cmd|com|cpl|crt|csh|der|exe|fxp|gadget|hlp|hta|htr|htw|ida|idc|idq|ins|isp|its|jse|ksh|lnk|mad|maf|mag|mam|maq|mar|mas|mat|mau|mav|maw|mda|mdb|mde|mdt|mdw|mdz|msc|msh|msh1|msh1xml|msh2|msh2xml|mshxml|msi|msp|mst|ops|pcd|pif|prf|prg|printer|pst|reg|rem|scf|scr|sct|shb|shs|shtm|shtml|soap|stm|url|vb|vbe|vbs|ws|wsc|wsf|wsh"
    Local $extensions = StringSplit($extData, "|")

    ; What is the root directory?
    $rootDirectory = InputBox("Root Directory", "Please enter the root directory...")
    archiveDir($rootDirectory)

    Func archiveDir($dir)
        $goDirs = True
        $goFiles = True

        ; Get all the files under the current dir
        $allOfDir = _FileListToArray($dir)
        $tmax = UBound($allOfDir)
        For $t = 0 to $tmax - 1
        Next

        Local $countDirs = 0
        Local $countFiles = 0
        $imax = UBound($allOfDir)
        For $i = 0 to $imax - 1
            If StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D") Then
                $countDirs = $countDirs + 1
            ElseIf StringInStr(($allOfDir[$i]), ".") Then
                $countFiles = $countFiles + 1
            EndIf
        Next

        If ($countDirs > 0) Then
            Local $allDirs[$countDirs]
            $goDirs = True
        Else
            $goDirs = False
        EndIf

        If ($countFiles > 0) Then
            Local $allFiles[$countFiles]
            $goFiles = True
        Else
            $goFiles = False
        EndIf

        $dirCount = 0
        $fileCount = 0
        For $i = 0 to $imax - 1
            If (StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D")) And ($goDirs == True) Then
                $allDirs[$dirCount] = $allOfDir[$i]
                $dirCount = $dirCount + 1
            ElseIf (StringInStr(($allOfDir[$i]), ".")) And ($goFiles == True) Then
                $allFiles[$fileCount] = $allOfDir[$i]
                $fileCount = $fileCount + 1
            EndIf
        Next

        ; Zip them if need be in current spot using 'ext_zip.zip' as file name, loop through each file ext.
        If ($goFiles == True) Then
            $fmax = UBound($allFiles)
            For $f = 0 to $fmax - 1
                $currentExt = getExt($allFiles[$f])
                $position = _ArraySearch($extensions, $currentExt)
                If @error Then
                    MsgBox(0, "Not Found", "Not Found")
                Else
                    $zip = _Zip_Create($dir & "\" & $currentExt & "_zip.zip")
                    _Zip_AddFile($zip, $dir & "\" & $allFiles[$f])
                EndIf
            Next
        EndIf

        ; Get all dirs under current DirCopy
        ; For each dir, recursive call from step 2
        If ($goDirs == True) Then
            $dmax = UBound($allDirs)
            $rootDirectory = $rootDirectory & "\"
            For $d = 0 to $dmax - 1
                archiveDir($rootDirectory & $allDirs[$d])
            Next
        EndIf
    EndFunc

    Func getExt($filename)
        $pos = StringInStr($filename, ".")
        $retval = StringTrimLeft($filename, $pos - 1)
        Return $retval
    EndFunc

    Updated; fixed a lot of bugs, but it's still not working. Like I said, I have a list of 'bad' file extensions, and this script should go through a directory of files (and subdirectories) and zip up (in separate zip files for each bad extension) all files WITH those bad extensions in the directories where it finds them. What is wrong???

    Read the article
