Search Results

Search found 4167 results on 167 pages for 'rman 11gr2 duplicate'.


  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible Duplicate: Which of these two for loops is more efficient in terms of time and cache performance

    Below are two programs that are almost identical except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens?

    Version 1:

        #include <stdio.h>
        #include <stdlib.h>

        main ()
        {
            int i, j;
            static int x[4000][4000];
            for (i = 0; i < 4000; i++) {
                for (j = 0; j < 4000; j++) {
                    x[j][i] = i + j;
                }
            }
        }

    Version 2:

        #include <stdio.h>
        #include <stdlib.h>

        main ()
        {
            int i, j;
            static int x[4000][4000];
            for (j = 0; j < 4000; j++) {
                for (i = 0; i < 4000; i++) {
                    x[j][i] = i + j;
                }
            }
        }
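
    The difference comes down to memory layout: the array is stored row by row, so the inner loop should vary the last index. Below is a rough, hedged sketch of the same experiment in Java (whose 2D arrays also keep each row contiguous); the class name and timing harness are illustrative, not part of the question.

        // LoopOrder.java -- illustrative sketch: times the two traversal orders.
        public class LoopOrder {
            static final int N = 4000;
            static int[][] x = new int[N][N];

            public static void main(String[] args) {
                long t0 = System.nanoTime();
                for (int i = 0; i < N; i++)           // Version 1 order: x[j][i] jumps a
                    for (int j = 0; j < N; j++)       // whole row (N ints) on every step
                        x[j][i] = i + j;
                long t1 = System.nanoTime();
                for (int j = 0; j < N; j++)           // Version 2 order: x[j][i] walks one
                    for (int i = 0; i < N; i++)       // row sequentially, so cache lines
                        x[j][i] = i + j;              // are reused instead of evicted
                long t2 = System.nanoTime();
                System.out.printf("version 1: %d ms, version 2: %d ms%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
            }
        }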

    Read the article

  • Generate k distinct numbers less than n

    - by davit-datuashvili
    Hi, I have the following task: generate k distinct positive numbers less than n, without duplication.

    My approach: first create an array of size k where the chosen numbers will be written, and a second array t of size n+1 used to mark which values have already been generated. If the slot for a freshly generated number is already 1, the number should be generated again; otherwise mark it, store it, and continue:

        int a[] = new int[k];
        int t[] = new int[n + 1];
        Random r = new Random();
        for (int i = 0; i < t.length; i++) {
            t[i] = 0;               // initialize the marker array to zero
        }
        int m = 0;                  // initialize it also
        for (int i = 0; i < a.length; i++) {
            m = r.nextInt(n);       // random element between 0 and n-1
            if (t[m] == 1) {
                // problem: when a duplicate occurs I want to repeat this step
                // again until a different number comes up
            } else {
                t[m] = 1;
                a[i] = m;
            }
        }

    So, my concrete problem: if t[m] == 1, the element has already been generated, so I want to generate a new number. But the number of generated values will not be k, because if a duplicate occurs at i==0 and I write continue, the loop moves on to i==1. I need something like a goto to repeat the step, e.g.:

        for (int i = 0; i < a.length; i++) {
            loop:
            m = r.nextInt(n);
            if (t[m] == 1) {
                continue loop;      // this is what I tried, but it does not work
            } else {
                t[m] = 1;
                a[i] = m;
                continue;           // continue with the next i, and so on
            }
        }

    I need this code in Java. Please help.
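
    One hedged way to get the retry behaviour the question asks for is a do/while loop around the draw; the class and method names below are illustrative, not from the original post. (For large k relative to n, shuffling the values 0..n-1 and taking the first k avoids retries entirely.)

        import java.util.Random;

        public class DistinctPicker {
            // Sketch: pick k distinct values in [0, n) by redrawing on collisions.
            static int[] pickDistinct(int k, int n) {
                if (k > n) throw new IllegalArgumentException("k must not exceed n");
                boolean[] used = new boolean[n];
                int[] a = new int[k];
                Random r = new Random();
                for (int i = 0; i < k; i++) {
                    int m;
                    do {
                        m = r.nextInt(n);    // redraw until an unused value appears
                    } while (used[m]);
                    used[m] = true;
                    a[i] = m;
                }
                return a;
            }

            public static void main(String[] args) {
                for (int v : pickDistinct(5, 20)) System.out.println(v);
            }
        }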

    Read the article

  • SQL -- How is DISTINCT so fast without an index?

    - by Jonathan
    Hi, I have a database with a table called 'links' with 600 million rows in it in SQLite. There are 2 columns in the database - a "src" column and a "dest" column. At present there are no indices. There are a fair number of common values between src and dest, but also a fair number of duplicated rows.

    The first thing I'm trying to do is remove all the duplicate rows, and then perform some additional processing on the results. However, I've been encountering some weird issues.

    Firstly, SELECT * FROM links WHERE src=434923 AND dest=5010182 returns one result fairly quickly and then takes quite a long time to finish, as I assume it's performing a table scan on the rest of the 600m rows. However, if I do SELECT DISTINCT * FROM links, then it immediately starts returning rows really quickly.

    The question is: how is this possible? Surely for each row, the row must be compared against all of the other rows in the table, but this would require a table scan of the remaining rows in the table, which SHOULD take ages! Any ideas why SELECT DISTINCT is so much quicker than a standard SELECT?

    Read the article

  • Merging rows with uniqueness constraints

    - by Flambino
    I've got a little time-tracking web app (implemented in Rails 3.2.8 & MySQL). The app has several users who add their time to specific tasks, on a given date. The system is set up so a user can only have 1 time entry (i.e. row) per task per date. I.e. if you add time twice on the same task and date, it'll add time to the existing row, rather than create a new one.

    Now I'm looking to merge 2 tasks. In the simplest terms, merging task ID 2 into task ID 1 would take this

        time | user_id | task_id | date
        -----+---------+---------+-----------
          10 |       1 |       1 | 2012-10-29
          15 |       2 |       1 | 2012-10-29
          10 |       1 |       2 | 2012-10-29
           5 |       3 |       2 | 2012-10-29

    and change it into this

        time | user_id | task_id | date
        -----+---------+---------+-----------
          20 |       1 |       1 | 2012-10-29   <-- time values merged (summed)
          15 |       2 |       1 | 2012-10-29   <-- no change
           5 |       3 |       1 | 2012-10-29   <-- task_id changed (no merging necessary)

    I.e. merge by summing the time values where the given user_id/date/task combo would conflict.

    I figure I can use a unique constraint and do an ON DUPLICATE KEY UPDATE ... if I do an insert for every task_id=2 entry. But that seems pretty inelegant. I've also tried to figure out a way to first update all the rows in task 1 with the summed-up times, but I can't quite figure that one out. Any ideas?

    Read the article

  • How many hours a day (of the standard 8) do you actually work? [closed]

    - by someone
    Possible Duplicate: How much do you [really] work a day

    When I started working (not so long ago), I was very conscientious about really working. If I didn't work for 10 minutes at a time, I felt like I was cheating. But as I started to look around me, I realized that I was the only one... and most of my coworkers were spending a big percentage of their time browsing the internet or playing solitaire.

    I started to slack off a little more than usual... while still basically getting all my work done. But while I do all that's required of me, and usually quickly, I no longer beg for work to fill up my spare time; I'm content to do what I'm told and play around when no one makes sure I'm busy enough. Which means that I'm often bored and underutilized. (Which I was even when I begged for work - people are pretty laid back about the workload and don't seem to realize how much I can get done if pushed to the fullest.)

    But I was just talking to a friend who graduated with me and also recently started working... and she came to me with the same concerns about slacking. She's working remotely, which means there are often gaps in communication when she can't really get anything done... And she's feeling guilty about it. Which made me rethink the whole thing...

    So, as workers, how many hours, out of the 8 standard average, are you actually working (honestly)? And, as bosses, how many hours do you expect your workers to work? And from an ethical standpoint, how much free time, or space out time, can workers have during the day without being considered to be "cheating" their office of labor and money?

    Read the article

  • Interface vs abstract in PHP: real-world scenario

    - by jason
    The goal is to learn whether to use abstract, interface, or both... I'm designing a program which allows a user to de-duplicate all images, but in the process, rather than just building classes, I'd like to build a set of libraries that will allow me to re-use the code for other possible future purposes. In doing so I would like to learn interface vs abstract, and was hoping someone could give me input on using either.

    Here is what the current program will do:

        - recursively scan a directory for all files
        - determine whether the file type is an image type
        - compare the md5 checksum against all other files found and only keep the ones which are not duplicates
        - store the total duplicates found at the end and display the size taken up
        - copy files that are not duplicates into folders by date, for example Year, Month folders, with the filename being the file creation date

    While I could just create a bunch of classes, I'd like to start learning more about interfaces and abstraction in PHP. So if I take the scan-directory class as the first example, I have several methods...

        ScanForFiles($path)
        FindMD5Checksum()
        FindAllImageTypes()
        getFileList()

    The scanForFiles could be public to allow anyone to access it, and it stores in an object the entire directory list of files found and many details about them - for example extension, size, filename, path, etc... The FindMD5Checksum runs against the fileList object created from scanForFiles and adds the md5 if needed. The FindAllImageTypes runs against the fileList object as well and marks whether the entries are image types. findMD5Checksum and findAllImageTypes are optional methods, because the user might not intend to run them all the time, or at all. The getFileList returns the fileList object itself.

    While I have a working copy, I've revamped it a few times trying to figure out whether I need to go with an interface or abstract or both. I'd like to know how an expert OO developer would design it, and why?
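
    For the split itself, one common pattern is: an interface to name the capability callers depend on, and an abstract base class to hold the shared plumbing. The sketch below uses Java purely for illustration (PHP's interface and abstract keywords behave analogously for this purpose); all class and method names are hypothetical, not taken from the question.

        import java.util.ArrayList;
        import java.util.List;

        // Interface: the contract other libraries code against.
        interface FileSource {
            List<String> getFileList();
        }

        // Abstract class: shared state and behaviour for every concrete scanner.
        abstract class AbstractScanner implements FileSource {
            protected final List<String> files = new ArrayList<>();

            @Override
            public List<String> getFileList() {       // common implementation lives here
                return files;
            }

            public abstract void scan(String path);   // each scanner decides how to walk
        }

        // Concrete implementation: the recursive directory walk described above.
        class RecursiveDirectoryScanner extends AbstractScanner {
            @Override
            public void scan(String path) {
                // walk 'path' recursively and add each file found to 'files'
                // (omitted in this sketch; optional steps such as MD5 hashing or
                // image-type detection would operate on the returned file list)
            }
        }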

    Read the article

  • Rails 4 testing bug?

    - by Jamato
    Situation: if we add two identical line items to a cart, we update the line item's quantity instead of adding a duplicate. In the browser everything works fine, but in the unit tests something fails because of an empty loop in the code, which I wanted to use to update all prices. Why? Is that a unit test engine bug?

    LineItem.all and cart.line_items produce two DIFFERENT structures during testing:

        #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 2, price: #<BigDecimal:ba0fb544,'0.4E1',9(27)>>
        #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 1, price: #<BigDecimal:ba0d1b04,'0.4E1',9(27)>>

    The cart.line_items one did not update the quantity.

    The code itself (it produces a LineItem which is then saved in line_item_controller, which calls this method):

        class Cart < ActiveRecord::Base
          has_many :line_items, dependent: :destroy

          def add_product(product_id)
            # LOOK: THIS LOOP BREAKS THE UNIT TEST, SRSLY, I MEAN IT
            line_items.each do |item|
            end
            current_item = line_items.find_by(product_id: product_id)
            fresh_price = Product.find_by(id: product_id).price
            if current_item
              current_item.quantity += 1
            else
              current_item = line_items.build(product_id: product_id, price: fresh_price)
            end
            return current_item
          end
          ...

    Unit test code:

        test "non-unique item added" do
          cart = Cart.new(:id => 999)
          line_item0 = cart.add_product(2)
          line_item0.save
          line_item1 = cart.add_product(1)
          line_item1.save
          assert_equal 2, cart.line_items.size  # success
          line_item2 = cart.add_product(1)
          line_item2.save
          assert_equal 2, cart.line_items.size, "what?"
          assert cart.total_price > 15  # fail: prices are not high enough, quantity of product1 = 1
          # we get total price from quantity; it's a simple method in the model
        end

    And once again: IT DOES WORK in the browser as it should, even with the loop. I feel so dumb right now...

    Read the article

  • Things to Avoid in C/C++ [closed]

    - by piemesons
    Possible Duplicate: What C++ pitfalls should I avoid?

    While searching for some information, I stumbled upon this series of small articles, Things to avoid in C/C++. So, I thought of sharing it...

    "C/C++ programmers are allowed to do some things they shouldn't. We are given functions that are supposed to be useful but aren't because of hidden faults, or taught ways to do things that are bad, wrong, not necessary. These posts will discuss many of these as time goes on."

        gets(): http://www.gidnetwork.com/b-56.html
        fflush(stdin): http://www.gidnetwork.com/b-57.html
        feof(): http://www.gidnetwork.com/b-58.html
        system("PAUSE"): http://www.gidnetwork.com/b-61.html
        scanf: http://www.gidnetwork.com/b-59.html
        scanf / character: http://www.gidnetwork.com/b-60.html
        scanf / string: http://www.gidnetwork.com/b-62.html
        scanf / number: http://www.gidnetwork.com/b-63.html
        scanf / epilogue: http://www.gidnetwork.com/b-64.html
        void main(): http://www.gidnetwork.com/b-66.html

    As this is a very useful subject/topic, I request all the members to keep adding valuable information to this thread, and make it a good source of information for all levels of programmers, especially beginners. Thanks...

    Read the article

  • Getting duplicate values from GROUP_CONCAT

    - by Sackling
    I am having a problem where I am getting duplicated values where I don't think I should be. Here is my SQL:

        SELECT DISTINCT p.products_image, pd.products_name, p.products_id, p.products_model,
               p.manufacturers_id, m.manufacturers_name, p.products_price, p.products_sort_order,
               p.products_tax_class_id, pd.products_viewed,
               group_concat(p2i.icons_id separator ",") AS icon_ids,
               group_concat(pi.icon_class separator ",") AS icon_class,
               IF(s.status, s.specials_new_products_price, NULL) AS specials_new_products_price,
               IF(s.status, s.specials_new_products_price, p.products_price) AS final_price
        FROM products p
        LEFT JOIN specials s ON p.products_id = s.products_id
        LEFT JOIN manufacturers m ON p.manufacturers_id = m.manufacturers_id
        JOIN products_description pd ON p.products_id = pd.products_id
        JOIN products_to_categories p2c ON p.products_id = p2c.products_id
        INNER JOIN products_specifications ps7 ON p.products_id = ps7.products_id
        LEFT JOIN products_to_icon p2i ON p.products_id = p2i.products_id
        LEFT JOIN products_icons pi ON p2i.icons_id = pi.icons_id
        WHERE p.products_status = '1'
          AND pd.language_id = '1'
          AND ps7.specification IN ('Polycotton' , 'Reflective')
          AND ps7.specifications_id = '7'
          AND ps7.language_id = '1'
          AND p2c.categories_id = '21'
        GROUP BY p.products_id
        ORDER BY p.products_sort_order

    The column that is getting doubled values is icon_ids, from the group_concat. This seems to happen only if 'Polycotton' and 'Reflective' are both IN ps7.specification; if it is only one or the other, then it works fine.

    The products_to_icon table contains 2 columns, products_id and icons_id. If a product has 2 icons, there are 2 rows, so I'm pretty sure it is this fact that is causing the duplicate icon ids. When I run this, the icon_ids column for a product with 2 icons is "4,4,6,6", for example, when what I need is "4,6".

    Read the article

  • Advanced Query with Join

    - by user1462589
    I'm trying to convert a product table that contains all the details of a product into separate tables in SQL. I've got everything done except for the duplicated descriptor details. The problem I am having is that all the products have size/color/style/other values that many other products also contain. I want to have only one size or color descriptor for all the items and reuse its "ID" for all the products - which I believe makes it a parent key to the product ID, which is a foreign key. The only problem is that every descriptor would have multiple foreign keys assigned to it. So I was thinking, on the fly, just have it skip figuring out a foreign parent key for each descriptor and simply check whether that descriptor exists; if it does, use its key for the descriptor.

    Data table:

        PI | Colo | Sz | OTHER
        ---+------+----+--------
        1  | Blue | 5  | Vintage
        2  | Blue | 6  | Vintage
        3  | Blac | 5  | Simple
        4  | Blac | 6  | Simple

    Its destination table is this:

        DI | Description
        ---+------------
        1  | Blue
        2  | Blac
        3  | 5
        4  | 6
        6  | Vintage
        7  | Simple

        Select Data.Table
        Unique.Data.Table.Colo
        Unique.Data.Table.Sz
        Unique.Data.Table.Other

    Then the dual part of the question: after we create all the descriptors, how do we do a new query and assign the product ID to the descriptors?

        PI | DI
        ---+---
        1  | 1
        1  | 3
        1  | 4
        2  | 1
        2  | 3
        2  | 4

    By figuring out how to do this I should be able to duplicate the pattern for all 300+ columns in the product table. Some of these fields are 60+ characters large, so it's going to save a ton of space. Do I use an array?

    Read the article

  • limiting mysql results by range of a specific key INCLUDING DUPLICATES

    - by aVC
    I have a query:

        SELECT p.*, m.*,
               (SELECT COUNT(*) FROM newPhotoonAlert n
                WHERE n.userIDfor='$id' AND n.threadID=p.threadID AND n.seen='0') AS unReadCount
        FROM posts p
        JOIN myMembers m ON m.id = p.user_id
        LEFT JOIN following f ON (p.user_id = f.user_id AND f.follower_id='$id'
                                  AND f.request='0' AND f.status='1')
        JOIN myMembers searcher ON searcher.id = '$id'
        WHERE ((f.follower_id = searcher.id) OR m.id='$id')
          AND p.flagged < '5'
        ORDER BY p.threadID DESC, p.positionID

    It brings results as expected, but I want to add another CLAUSE to limit the results. Say a sample (minimal shown) set of data looks like this with the above query:

        threadID  postID  positionID  url
        564       1254    2           a.com
        564       1245    1           a1.com
        541       1215    3           b1.com
        541       1212    2           b2.com
        541       1210    1           b3.com
        523       745     1           c1.com
        435       689     2           d2.com
        435       688     1           a4.com
        256       345     1           s3.com
        164       316     1           f1.com
        ...

    I want to get the ROWS corresponding to 2 DISTINCT threadIDs starting from the MAX, but I want to include duplicates as well. Something like:

        AND p.threadID IN (select just two of all threadIDs currently selected, but include duplicate rows)

    So my result should be:

        threadID  postID  positionID  url
        564       1254    2           a.com
        564       1245    1           a1.com
        541       1215    3           b1.com
        541       1212    2           b2.com
        541       1210    1           b3.com

    Read the article

  • Should I Teach My Son Programming? What approaches should I take? [closed]

    - by DaveDev
    I was wondering if it's a good idea to teach object-oriented programming to my son. I was never really good at math as a kid, but I think since I've started programming it's given me a greater ability to understand math, by being better able to visualise relationships between abstract models. I thought it might give him a better advantage in learning and applying logical and mathematical concepts throughout his life if he was able to take advantage of the tools available to programmers.

    What would be the best programming fields, techniques and/or concepts? What approach should I take? What concepts should I avoid? In what fields of mathematics would he find this benefits him most?

    He's only 2 now, so it wouldn't be for another few years before I do this (and even at that, only from a very high-level point of view). I thought I'd put it to the programming community and see what you guys thought.

    Possible Duplicate: What are some recommended programming resources for pre-teens?

    Read the article

  • (again) I improve my question [closed]

    - by gcc
    Possible Duplicate: realizing number … how?

    I hold input like this (A is a char pointer):

        A[0]=n
        A[1]=j
        A[2]=n    // I take (one number) + special char(s) + command(s) (like $ or #) from the user
        A[3]=d    // the input order can be changed, e.g. char(s) + number + command
        .         // there is one number in A[]
        .         // and every A[i] is important for me, because what I do in the next step
        .         // is determined by that input in A[h] or A[n]
        .         // example:
        .         // when you see $, go to the front of the array and do something
        .         // when you see a number, go to the equation and use it there
        A[j]=$    // (the number can be positive or negative)
        .
        A[i]=14   // (any number)
        .
        .

        int func(int temp)
        {
            if (temp == 'n')   // ... do something, then
                return 10;
            if (temp == 'j')
                return 11;
            if (temp == 'd')
                return 12;
            if (/*........*/)  // when temp is a number I want to return 13;
                return 13;     // in this if statement, what code should I write (instead of ........)?
        }

    How can I do this?

    NOTE: please don't close my question - when you close it, I cannot edit it.
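
    As a hedged illustration of what that last if needs to express - recognising that temp holds a digit character - here is a minimal sketch in Java (the class and method names are made up; in C, the isdigit() function from <ctype.h> plays the same role). Multi-digit or signed values would still require the caller to collect consecutive digit characters:

        public class CharClassifier {
            // Illustrative only: classify one input character, mirroring func() above.
            static int classify(int temp) {
                if (temp == 'n') return 10;
                if (temp == 'j') return 11;
                if (temp == 'd') return 12;
                if (temp >= '0' && temp <= '9') return 13;   // the "is it a number?" test
                return -1;                                   // anything else (e.g. $ or #)
            }

            public static void main(String[] args) {
                System.out.println(classify('4'));   // prints 13
            }
        }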

    Read the article

  • free open-source linux screenshot & ocr tool

    - by Gryllida
    I'm looking for a tool which would be able to capture a screen region, pass it to OCR and put the result into the clipboard.

        import ppm:- | gocr -i - | xclip -selection c

    works, but gocr is unreliable: simple text on a webpage has errors. It is a clear font, but the OCR tool always misses "r" and replaces it with an underscore.

        import ppm:- | ocrad -i - | xclip -selection c

    says "ocrad: maxval 255 in ppm "P6" file."

    tesseract needs an image file and does not accept piping input to it. xfce4-screenshooter does not do OCR. ABBYY Screenshot Reader is proprietary. tessnet2 is freeware running on a proprietary platform. Google Docs can OCR screenshots in a batch, but my data is confidential and better not put online. Graphical interface solutions would be acceptable for this question, too.

    There is a number of existing SuperUser questions about OCR. They fall in several categories.

    Questions just about OCR, without the "screenshot taking" part:

        Open Source OCR for linux
        Free OCR for Arabic text
        Looking for recommendations on OCR problem - tabular numeric data
        Which has better OCR applications: Ubuntu, or Mac/iPad, or Windows?
        How can I preform OCR from the command line?
        OCR solution on linux machine from command line (duplicate)
        Free OCR software
        OCR for Sanskrit ( OR devanagari)
        Copy image and paste to OCR (windows)

    File processing OCR instead of screenshot:

        Online OCR website for processing an entire pdf file at one time?
        Practical OCR solution for converting a large book to a digital format?
        How to extract text with OCR from a PDF on Linux?
        Batch-OCR many PDFs
        OCR Image based PDF
        Copy image and paste to OCR
        Extract OCR text from Evernote
        OCR in Word 2013
        Replace (OCR) garbled text in PDF?

    Process files prior to running OCR:

        How can I make OCR recognize my documents' text better?
        Tesseract OCR recognition bilingual document. mistakes tolerance level setup
        OCR for low quality images
        How do I get the best quality screenshot for OCR (Optical Character Recognition) and what tool would be the best for screenshots?

    OCR training:

        Training Tesseract-OCR for english language fonts

    None of them answer this question.

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge latency writes (I have seen 10 seconds and above) without any real explanation.

    Platform: Win2003 Server with NTFS.
    Monitoring: Sysinternals Process Monitor (see link below) and our own application logging.

    We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.:

        - Making the first part of file names unique to aid 8.3 name generation
        - Writing files to multiple directories
        - Changing Intel Disk Write Caching
        - Windows File/Printer Sharing: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System-Advanced-Performance-Advanced)
        - NtfsDisableLastAccessUpdate - use "fsutil behavior set disablelastaccess 1"
        - Disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart
        - Enable a large size file system cache
        - Disable paging of the kernel code
        - IO Page Lock Limit
        - Turn Off (or On) the Indexing Service

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason and a solution (programmatic or not)?

    We can reproduce the problem using IOMeter and a simple setup:

        1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
        2. Select the Worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes).
        3. Next create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
        4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds and both 'Number of Workers to Spawn Automatically' settings to zero.
        5. Select the Worker Thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
        6. Hit the 'Go' button (green flag) and monitor the Results tab.

    The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8 core rack server with 4GB and onboard RAID). I'd be interested to see what other people get.

    We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array.

    Any help is much appreciated.

    Richard

    Right-click and select 'Open Link in New Tab': ProcMon Result

    Read the article

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What's the best Remote Desktop Application?

    I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves.

    In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways that we can service our clients remotely whenever possible.

    What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as be robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. This would be unacceptable because of the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for.)

    Along with the ease of use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must.

    The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across.

    Are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

    Read the article

  • How to deny web access to some files?

    - by Strae
    I need to do a somewhat strange operation. First, I run on Debian, apache2 (which 'runs' as user www-data).

    I have simple text files with .txt or .ini, or whatever extension - it doesn't matter. These files are located in subfolders with a structure like this:

        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt

    So the file name is always the same, ditto for the 'hierarchy'; only the name of the folder changes:

        /folder-name-static/folder-name-dynamic/file-name-static.txt

    What I need is (I think) relatively simple: I must be able to read that file from programs on the server (Python, PHP for example), but if I try to retrieve the file contents through a browser (typing the URL www.example.com/folder1/car/foobar.txt, or via cURL, etc.) I must get a forbidden error, or whatever, but not access the file. It would also be nice if those files were 'hidden' even when accessed via FTP, or at least couldn't be downloaded (apart from with the FTP root and user credentials). How can I do this?

    I found this online, to be put in the .htaccess file:

        <Files File.txt>
          Order allow,deny
          Deny from all
        </Files>

    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), and not in subfolders. Moreover, the folders at the second level (www.example.com/folder1/fruit/foobar.txt) will be dynamically created, and I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, which applies to all files with a given name located at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and just that one folder changes?

    EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too - images, and stuff used by my users - so I'm simply trying not to have a duplicate folder system, like:

        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]

    and then, for the 'secret' files:

        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt

    Read the article

  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY

    I installed Debian (Squeeze) a while back in my home network to host some personal sites (thank god). During the installation it prompted me to enter a user other than root - so in a rush I used my name as user and pass (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I'm always logged in as root to perform configurations, etc. A few days or a week passes and I forget to change the password.

    Then I finally get my web site finished and I open the port forwarding on my router and DynDNS to point to the server in my home. I've done this many times in the past and never had issues, but I use a cryptic root password and I guess disabled regular accounts.

    Today I reformat my Windows 7 machine, and after spending all day tweaking and updating to SP1 I look for cloning apps and find Clonezilla, and see it supports SSH cloning. So I go through the process, only to discover I need a user, so I log into my web server and see I already have the user 'alex' and realize I don't know the password. So I change the password to something cryptic and visit the directory 'home', only to realize there are contents such as passfile, bengos, etc. My heart sinks - I've been hacked!!! Sure as hell, there are all sorts of scripts and password files. I run a 'last' command and it seems they last logged in April 3rd.

    Questions:

    What can I do to see if they did anything destructive? Should I reformat and reinstall?

    How restrictive is Debian/Squeeze in terms of user permissions out of the box? All my personal website stuff was created using 'root', so changing files does not seem to have occurred.

    How did they determine there was a user 'alex' on the machine? Can you query any machine and figure this out? What the users are?

    It looks like they tried to run an IP scan... other nodes on the network are running Windows 7. One of them seems a little wonky as of late - is it possible they buggered up that system?

    What corrective action can I take to avoid this from happening again? And how can I figure out what might have changed or been hacked? I'm hoping Debian out of the box is fairly secure and at best they managed to read some of my source code. :p

    Regards,
    Alex

    Read the article

  • Moving a Windows 2003 HDD into a virtual machine - with HDD shrink

    - by jm666
    Before you vote to close as an exact duplicate, please read the full question. I have already read:

        Can I make a virtual machine out of a Windows XP physical machine?
        Disk2vhd, convert my PC to Hyper-V Virtual Machine
        Creating a Windows Virtual PC image from a Physical machine
        physical machine to virtual machine and place into VirtualBox
        BSOD trying to migrate Windows XP from a physical to a virtual machine
        http://en.wikipedia.org/wiki/Physical-to-Virtual

    and all other similar questions here, and several external sites too. Unfortunately, I didn't find an answer to my problem.

    I have a physical machine with a 500GB HDD, on which an old Windows 2003 server is installed with one server application. The application, like the Windows itself, is too old, there is no support for it today, I don't have the installation media, and so on.. ;( Only approx. 100MB of the HDD is used (maybe less once I delete all unnecessary files). I want to convert the machine into VirtualBox, and VirtualBox should run on the same machine. Is it possible to do this with the following steps?

        1. Attach another HDD (via USB or internally).
        2. Boot a live Linux from CD and mount the HDDs.
        3. Run "something" on the Linux (the above Wikipedia article has many pointers to software) for the conversion and store the image on the USB HDD. Unfortunately, many of the tools rely on features that exist only in Windows XP and above; there is no information about Windows 2003 server, so what is a working solution for Windows 2003?
        4. Try to boot the virtual image with VirtualBox; when it runs OK, remove the old installation, install Linux on the old 500GB HDD, copy the image and run it.

    The above should work (I hope), but here is the problem: I currently have only a 320GB external USB HDD (of course, I can remove it from its box and attach it as an internal HDD too). So for the conversion I am looking for an on-the-fly HDD shrink: while moving the physical 500GB HDD I need to shrink it onto a smaller HDD - as I said above, only 100MB is used. Does something (free) exist for this, or is the only way to buy a larger 1TB HDD and use it for the conversion?

    Another question: does anybody have real experience with a Windows 2003 conversion into VirtualBox? I'm looking for an answer from someone who has really done it and can point out the real pitfalls (the googling I can do myself). Is there a better approach to the solution?

    Read the article

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time for a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this?

    top:

        top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05
        Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
        %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
        KiB Mem: 2042776 total, 1347916 used, 694860 free, 249396 buffers
        KiB Swap: 3976080 total, 30552 used, 3945528 free, 574164 cached

        PID    USER  PR  NI  VIRT  RES  SHR   S  %CPU  %MEM  TIME+    COMMAND
        17445  bind  20  0   244m  42m  3124  S  99.4  2.2   2345:03  named

    rndc stats:

        +++ Statistics Dump +++ (1352931389)
        ++ Incoming Requests ++
          65869 QUERY
        ++ Incoming Queries ++
          31809 A, 241 NS, 3 CNAME, 27455 SOA, 276 PTR, 123 MX, 462 TXT, 5400 AAAA, 7 A6, 1 DS, 14 DNSKEY, 15 SPF, 55 AXFR, 8 ANY
        ++ Outgoing Queries ++
          [View: internal] 22206 A, 509 NS, 10 SOA, 25 PTR, 12 MX, 524 TXT, 4851 AAAA, 62 DNSKEY, 19 SPF, 3157 DLV
          [View: external] 87 A, 2 NS, 80 AAAA, 120 DNSKEY, 7 DLV
          [View: _bind]
        ++ Name Server Statistics ++
          65869 IPv4 requests received, 27670 requests with EDNS(0) received, 112 TCP requests received,
          65652 responses sent, 20 truncated responses sent, 27670 responses with EDNS(0) sent,
          62920 queries resulted in successful answer, 37117 queries resulted in authoritative answer,
          28482 queries resulted in non authoritative answer, 7 queries resulted in referral answer,
          591 queries resulted in nxrrset, 53 queries resulted in SERVFAIL, 2081 queries resulted in NXDOMAIN,
          14530 queries caused recursion, 162 duplicate queries received, 55 requested transfers completed
        ++ Zone Maintenance Statistics ++
          109536 IPv4 notifies sent
        ++ Resolver Statistics ++
          [Common]
          [View: internal] 29362 IPv4 queries sent, 2013 IPv6 queries sent, 28531 IPv4 responses received,
          4209 NXDOMAIN received, 6 SERVFAIL received, 31 FORMERR received, 32 EDNS(0) query failures,
          3359 query retries, 836 query timeouts, 5348 IPv4 NS address fetches, 3271 IPv6 NS address fetches,
          83 IPv4 NS address fetch failed, 2779 IPv6 NS address fetch failed, 17421 DNSSEC validation attempted,
          12731 DNSSEC validation succeeded, 4690 DNSSEC NX validation succeeded,
          21104 queries with RTT 10-100ms, 7418 queries with RTT 100-500ms, 3 queries with RTT 500-800ms,
          1 queries with RTT 800-1600ms
          [View: external] 192 IPv4 queries sent, 104 IPv6 queries sent, 192 IPv4 responses received,
          2 NXDOMAIN received, 104 query retries, 44 IPv4 NS address fetches, 44 IPv6 NS address fetches,
          1 IPv4 NS address fetch failed, 1 IPv6 NS address fetch failed, 4 DNSSEC validation attempted,
          3 DNSSEC validation succeeded, 1 DNSSEC NX validation succeeded,
          152 queries with RTT 10-100ms, 40 queries with RTT 100-500ms
          [View: _bind]
        ++ Cache DB RRsets ++
          [View: internal (Cache: internal)] 2007 A, 652 NS, 131 CNAME, 1 MX, 32 TXT, 421 AAAA, 28 DS, 244 RRSIG,
          110 NSEC, 3 DNSKEY, 2 !A, 2 !TXT, 89 !AAAA, 2 !SPF, 14 !DLV, 148 NXDOMAIN
          [View: external (Cache: external)] 55 A, 12 NS, 34 AAAA, 2 DS, 10 RRSIG, 1 DNSKEY
          [View: _bind (Cache: _bind)]
        ++ Socket I/O Statistics ++
          82958 UDP/IPv4 sockets opened, 2118 UDP/IPv6 sockets opened, 4 TCP/IPv4 sockets opened,
          1 TCP/IPv6 sockets opened, 82956 UDP/IPv4 sockets closed, 2117 UDP/IPv6 sockets closed,
          58 TCP/IPv4 sockets closed, 15 UDP/IPv4 socket bind failures, 2117 UDP/IPv6 socket connect failures,
          29554 UDP/IPv4 connections established, 59 TCP/IPv4 connections accepted, 2117 UDP/IPv6 send errors,
          5 UDP/IPv4 recv errors
        ++ Per Zone Query Statistics ++
        --- Statistics Dump --- (1352931389)

    Read the article

  • Use launchctl to fire an AppleScript script periodically

    - by Daktari
    I have written an AppleScript that lets me back up a particular file. The script runs fine inside AppleScript Editor: it does what it's supposed to do perfectly. So far so good.

    Now I'd like to run this script at timed intervals. So I use launchctl and a .plist to make this happen. That's where the trouble starts:

        - the script is loaded at the set interval by launchctl
        - the AppleScript Editor (when open) brings its window (with that script) to the foreground, but no code is executed
        - when AppleScript Editor is not running, nothing seems to be happening at all

    Any ideas as to why this is not working?

    -- After editing (as per Daniel Beck's suggestions) my plist now looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>KeepAlive</key>
            <false/>
            <key>Label</key>
            <string>com.opera.autosave</string>
            <key>ProgramArguments</key>
            <array>
                <string>osascript</string>
                <string>/Users/user_name/Library/Scripts/opera_autosave_bak.scpt</string>
            </array>
            <key>StartInterval</key>
            <integer>30</integer>
        </dict>
        </plist>

    and the AppleScript I'm trying to run:

        on appIsRunning(appName)
            tell application "System Events" to (name of processes) contains appName
        end appIsRunning

        --only run this script when Opera is running
        if appIsRunning("Opera") then
            set base_path to "user_name:Library:Preferences:Opera Preferences:sessions:"
            set autosave_file to "test.txt"
            set autosave_file_old to "test_old.txt"
            set autosave_file_older to "test_older.txt"
            set autosave_file_oldest to "test_oldest.txt"
            set autosave_path to base_path & autosave_file
            set autosave_path_old to base_path & autosave_file_old
            set autosave_path_older to base_path & autosave_file_older
            set autosave_path_oldest to base_path & autosave_file_oldest
            set copied_file to "test copy.txt"
            set copied_path to base_path & copied_file

            tell application "Finder"
                duplicate file autosave_path
                delete file autosave_path_oldest
                set name of file autosave_path_older to autosave_file_oldest
                set name of file autosave_path_old to autosave_file_older
                set name of file copied_path to autosave_file_old
            end tell
        end if

    Read the article

  • iCloud stuff stops working while connected to OpenVPN [closed]

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is the Viscosity client on Mac OS X 10.8.2, and after some testing, we can rule out the client as being part of the problem.

    Everything has been working fine except for Apple's iCloud stuff. Web surfing, email, FTP, NNTP, and Skype are all working as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working: I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working.

    Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over:

        Client -> work VPN -> my OpenVPN box -> Internet
        Client -> Airport Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet

    Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server.

    I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't posted yet and seems to be in their moderation queue still. I tried the freenode IRC channels, but nobody is awake, so here I am. I have Googled extensively for this, and can find nothing that is related. Help me get iCloud stuff working again!

    Read the article

  • Solution to route/proxy SNMP Traps (or Netflow, generic UDP, etc) for network monitoring?

    - by Christopher Cashell
    I'm implementing a network monitoring solution for a very large network (approximately 5000 network devices). We'd like to have all devices on our network send SNMP traps to a single box (technically this will probably be an HA pair of boxes) and then have that box pass the SNMP traps on to the real processing boxes. This will allow us to have multiple back-end boxes handling traps, and to distribute load among those back-end boxes. One key feature that we need is the ability to forward the traps to a specific box depending on the source address of the trap. Any suggestions for the best way to handle this?

    Among the things we've considered are:

        - Using snmptrapd to accept the traps, and have it pass them off to a custom-written Perl handler script to rewrite the trap and send it to the proper processing box
        - Using some sort of load balancing software running on a Linux box to handle this (having some difficulty finding many load balancing programs that will handle UDP)
        - Using a load balancing appliance (F5, etc.)
        - Using iptables on a Linux box to route the SNMP traps with NATing

    We've currently implemented and are testing the last solution, with a Linux box with iptables configured to receive the traps and then, depending on the source address of the trap, rewrite it with a destination NAT (DNAT) so the packet gets sent to the proper server. For example:

        # Range: 10.0.0.0/19 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3

        # Range: 10.0.33.0/21 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.33.0/21 -j DNAT --to-destination 10.1.2.3

        # Range: 10.1.0.0/16 Site: xyz01 Destination: bar01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j DNAT --to-destination 10.3.2.1

    This should work with excellent efficiency for basic trap routing, but it leaves us completely limited to what we can match and filter on with iptables, so we're concerned about flexibility for the future.

    Another feature that we'd really like, but isn't quite a "must have", is the ability to duplicate or mirror the UDP packets. Being able to take one incoming trap and route it to multiple destinations would be very useful.

    Has anyone tried any of the possible solutions above for SNMP traps (or Netflow, general UDP, etc.) load balancing? Or can anyone think of any other alternatives to solve this?
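
    For the first option (a user-space relay in front of the processors), a rough, hedged sketch of the idea in Java is below; the class name, prefix check, and back-end addresses are illustrative only, taken from the example ranges above. Note that a plain relay like this rewrites the source address (the back end sees the relay, not the agent), which is exactly what the DNAT approach avoids; mirroring a trap to several destinations is just extra send() calls.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        // Hypothetical sketch, not an existing tool: relay SNMP traps (UDP/162)
        // to a back end chosen from the sender's source prefix.
        public class TrapRelay {
            public static void main(String[] args) throws Exception {
                DatagramSocket sock = new DatagramSocket(162);   // needs root/CAP_NET_BIND_SERVICE
                byte[] buf = new byte[65535];
                InetAddress foo01 = InetAddress.getByName("10.1.2.3");
                InetAddress bar01 = InetAddress.getByName("10.3.2.1");
                while (true) {
                    DatagramPacket in = new DatagramPacket(buf, buf.length);
                    sock.receive(in);
                    String src = in.getAddress().getHostAddress();
                    InetAddress dest = src.startsWith("10.1.") ? bar01 : foo01;   // crude prefix match
                    sock.send(new DatagramPacket(in.getData(), in.getLength(), dest, 162));
                    // to mirror a trap, send the same payload to additional destinations here
                }
            }
        }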

    Read the article

  • Easy Install Resets Hardware Settings

    - by bob5972
    I think there's a problem with the Easy Install setup. I selected "Installer disc image," and it came up and said that it would use "Easy Install." I think I hit next, and then I went back, changed my mind, and selected "I will install the operating system later." After finishing the wizard, I think it still went and ran Easy Install, because it auto-installed VMware Tools and I never got to select anything for my Windows setup. Then, when it was finished, all the hardware changes I had made were lost: the RAM was changed from 2 GB back to the default of 1 GB, my CD-ROM drive was set back to "Use a Physical Drive" and "Connect at Power On", and the Floppy Drive was also set to "Connect at Power On", after I had disabled them.

    I was trying to install Windows 7, and I wasn't sure if I could change the RAM settings after installation without needing to reactivate it, so I deleted the VM and tried to start over. This time, the "Installer disc image" had my Win7 image pre-selected, so I clicked "I will install it myself later," set my hardware again, and tried to boot off my CD. Again, it did an Easy Install, and reset my RAM and my drive settings.

    So I deleted it again, and the third time it still had the Win7 image pre-selected, but this time I unplugged all the drives, let it try to boot off the empty hard drive and fail, and made sure it kept all my hardware settings. Then I powered it off, put my Win7 image in the guest CD-ROM, and powered it on. This time it finally let me run the installer, pick my language, and type a user name. This time when I powered it off it kept my hardware settings.

    I can duplicate the error by doing exactly the steps above: creating a new VM, selecting my "Installer Image," hitting next, going back, selecting "I will install it myself," finishing the wizard, and customizing my hardware right before the end, setting the CD-ROM drive to the same installer media and "Connect on Power On." (If I start it without the CD-ROM in the first time, it doesn't do it.) When I power on, I'm not prompted for any install information (like language and user name), and when it runs the first time my hardware choices are reset (like the RAM back to 1 GB).

    If it helps, I'm running VMware Workstation 7.0.1 build-227600 on Gentoo Linux.

    Read the article
