Search Results

Search found 6695 results on 268 pages for 'news analysis'.


  • Should core application configuration be stored in the database, and if so what should be done to secure it?

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to it in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database. This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data).

    I'm starting to wonder whether it would have been more appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded, and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to determine whether or not this is appropriate?

    The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes, and the database user that this application uses will only have the ability to manipulate data through stored procedures. What else can I do?
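    A minimal sketch of the kind of guard trigger described above, assuming a MySQL 5.5+ backend (for SIGNAL) and a hypothetical config_nodes table; the names are illustrative only:

        DELIMITER $$
        CREATE TRIGGER config_nodes_no_delete
        BEFORE DELETE ON config_nodes
        FOR EACH ROW
        BEGIN
            -- Abort any DELETE, whether issued by the app or an ad-hoc client
            SIGNAL SQLSTATE '45000'
                SET MESSAGE_TEXT = 'config_nodes rows may not be deleted';
        END$$
        DELIMITER ;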


  • Retain numerical precision in an R data frame?

    - by David
    When I create a data frame from numeric vectors, R seems to truncate the value below the precision that I require in my analysis: data.frame(x=0.99999996) returns 1 (see update 1). I am stuck when fitting spline(x,y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this but I would prefer to use a standard solution if available.

    Example: here is an example data set:

        d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435,
                              0.999602102672081, 0.999987126195509, 0.999999955814133,
                              0.999999999999966),
                        y = c(38.3026509783688, 11.5895099585560, 10.0443344234229,
                              9.86152339768516, 9.84461434575695, 9.81648333804257,
                              9.83306725758297))

    The following solution works, but I would prefer something that is less subjective:

        plot(d$x, d$y, ylim=c(0,50))
        lines(spline(d$x, d$y), col='grey')                      # bad fit
        lines(spline(d[-c(4:6),]$x, d[-c(4:6),]$y), col='red')   # reasonable fit

    Update 1: Since posting this question, I realize that this will return 1 even though the data frame still contains the original value, e.g.

        > dput(data.frame(x=0.99999999996))
        structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L),
                  class = "data.frame")

    Update 2: After using dput to post this example data set, and some pointers from Dirk, I can see that the problem is not in the truncation of the x values but in the limits of the numerical errors in the model that I have used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).
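    A quick check, using only base R, that the value is stored at full double precision and merely printed rounded (the default print precision is 7 significant digits):

        d <- data.frame(x = 0.99999996)
        print(d)               # displays 1 under the default options(digits = 7)
        print(d, digits = 15)  # displays 0.99999996
        d$x == 0.99999996      # TRUE: the stored value was never truncated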


  • ajax on parked domain

    - by Daryl
    I'm currently writing this jQuery and for some reason (I don't know why) it works on the normal domain, but on the parked domain it doesn't.

    Normal domain - http://www.thefinishedbox.com
    Parked domain - http://www.tfbox.com

    If you scroll down to the colony news and hit the "click me" link, you'll see it retrieves data via jQuery AJAX on the normal domain, but on the parked domain it won't. Here is the jQuery code I have so far (it's pretty standard):

        $(function() {
            $.ajaxSetup({ cache: false });

            var ajax_load = "Load me plz";

            // load() functions
            var loadUrl = "http://thefinishedbox.com/wp-content/themes/tfbox-beta/test.php";

            $('.overlay').css({ opacity: '0' });

            $('.toggle').click(function() {
                $('.overlay').css({ display: 'block' }).animate({ opacity: '1' }, 300);
                $(".overlay .content").html(ajax_load).load(loadUrl);
                return false;
            });

            $('.close').click(function() {
                $('.overlay').animate({ opacity: '0' }, 300);
                $('.overlay').queue(function() {
                    $(this).css({ display: 'none' });
                    $(this).dequeue();
                });
                return false;
            });
        });

    I'm a complete noob when it comes to AJAX so any help would be massively appreciated.
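    A likely explanation, offered as a guess from the code above: loadUrl is an absolute URL on thefinishedbox.com, so when the same page is served from tfbox.com the .load() call becomes a cross-domain request, which browsers block under the same-origin policy. A minimal change worth trying is a host-relative path, so the request always goes to whichever domain served the page:

        // Relative to the current host, so it stays same-origin on both domains
        var loadUrl = "/wp-content/themes/tfbox-beta/test.php";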


  • What are the steps for SQL optimization and changes without affecting the live system?

    - by Space Cracker
    We have a big portal built using SharePoint 2007, ASP.NET 3.5 and SQL Server 2005. Many developers have worked on it since 01/2008, and we are now doing a huge analysis of the current SQL databases [not the SharePoint DB] to optimize and enhance them. The main DB has about 330 tables and 1720 stored procedures (SPs) created from 01/2008 till now.

    - Many table and column names are very long and we want to shorten them.
    - We found the SP names written in 25 different formats :( and some of them are very complex; we also want to rename many of them.
    - Many SP parameters need to be renamed.
    - One of the biggest tables is the registered-user table, which will be split into more than one table for some optimization; many column names will be changed.

    I searched for a way to rename tables and columns and found the SQL Refactor tool, but I am still trying it out. My questions:

    1. Is SQL Refactor the best tool for renaming, or is there any other one?
    2. If I want to do it manually, are there any references or best practices for that?
    3. How can I make such changes in a fast and stable way? I am searching for recommendations and case studies, if they exist.
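    For the manual route on SQL Server 2005, sp_rename is the built-in mechanism; a small sketch (the object names are placeholders):

        -- Rename a table
        EXEC sp_rename 'dbo.tblRegisteredUserInformationMaster', 'RegisteredUser';

        -- Rename a column
        EXEC sp_rename 'dbo.RegisteredUser.RegisteredUserEmailAddressText', 'Email', 'COLUMN';

    Note that sp_rename does not rewrite references inside stored procedure bodies, so every SP that mentions a renamed object must be found (e.g. by searching sys.sql_modules) and updated separately, which is exactly the tedium that tools like SQL Refactor automate.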


  • Parsing JSON to CSV in Python

    - by user185955
    I'm trying to download some game stats to do some analysis; the only problem is that the data isn't 100% consistent from season to season. I grab the JSON file from the site, then wish to save it to a CSV whose first line contains the headings for the columns, so each heading would essentially be a key from the Python data type.

        #!/usr/bin/env python
        import requests
        import json
        import csv

        base_url = 'http://www.afl.com.au/api/cfs/afl/'
        token_url = base_url + 'WMCTok'
        player_url = base_url + 'matchItems/round'

        def printPretty(data):
            print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': ')))

        # session makes it simple to use the token across the requests
        session = requests.Session()
        token = session.post(token_url).json()['token']        # get the token
        session.headers.update({'X-media-mis-token': token})  # set the token

        Season = 2014
        Roundno = 4
        if Roundno < 10:
            strRoundno = '0' + str(Roundno)
        else:
            strRoundno = str(Roundno)

        # get some data (could easily be a for loop; might want to put in a delay
        # using sleep so that you don't get IP blocked)
        data = session.get(player_url + '/CD_R' + str(Season) + '014' + strRoundno)

        # print everything
        printPretty(data.json())

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            spamwriter = csv.writer(csvfile, delimiter="'", quotechar='|',
                                    quoting=csv.QUOTE_ALL)
            for profile in data.json()['items']:
                spamwriter.writerow(['%s' % (profile)])
            #for key in data.json().keys():
            #    print("key: %s , value: %s" % (key, data.json()[key]))

    The above code grabs the JSON and writes it to a CSV, but it puts the key in each individual cell next to the value (e.g. 'venueId': 'CD_V190'); the keys need to appear just once, across the first row, as headings. It gives me a CSV file with data in the cells like this:

        Column A                   B
        'tempInCelsius': 17.0      'totalScore': 32
        'tempInCelsius': 16.0      'totalScore': 28

    What I want is the data like this:

        tempInCelsius    totalScore
        17               32
        16               28

    As I mentioned at the top, the data isn't always consistent, so if I define which fields to grab with spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']]) then it will error out on certain data grabs. This is why I'm now trying the above method, so it just grabs everything regardless of what data is there.
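    Since the set of keys varies, one standard-library approach is csv.DictWriter, which writes the keys as a single header row and tolerates rows with missing or extra keys; a sketch assuming each item in data.json()['items'] is a flat dict:

        import csv

        profiles = data.json()['items']   # list of dicts; keys may vary per row

        # Union of the keys across all rows, so every column gets a heading
        fieldnames = sorted({key for profile in profiles for key in profile})

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames,
                                    restval='',             # blank cell where a row lacks a key
                                    extrasaction='ignore')  # skip keys not in fieldnames
            writer.writeheader()
            writer.writerows(profiles)

    Nested structures (dicts inside a profile) would still land in one cell as their repr, so those would need flattening first.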


  • PHP OOP class sensitive to 'counter' field name

    - by Mac Taylor
    Hey guys, recently I managed to write a class for my stories, and everything is fine except for the counter field, which stores a page's hits. Here is my class:

        class nk_posts
        {
            var $data = array();

            public function _data()
            {
                global $db;
                $result = $db->sql_query("
                    SELECT bt_stories.*, bt_tags.*, bt_topics.*,
                           group_concat(DISTINCT bt_tags.tag) as mytags,
                           group_concat(DISTINCT bt_topics.topicname) as mytopics
                    FROM bt_stories, bt_tags, bt_topics
                    WHERE CONCAT(' ', bt_stories.tags, ' ') LIKE CONCAT('%', bt_tags.tid, '%')
                      AND CONCAT(' ', bt_stories.associated, ' ') LIKE CONCAT('%', bt_topics.topicid, '%')
                      AND time <= now()
                      AND section = 'news'
                      AND approved = '1'
                    GROUP BY bt_stories.sid
                    ORDER BY bt_stories.sid DESC
                ");
                while ($this->data = $db->sql_fetchrow($result)) {
                    $this->sid     = $this->data['sid'];
                    $this->title   = $this->data['title'];
                    $this->counter = $this->data['counter'];
                    // ------ Rest of fields ------
                    $this->_output();
                }
            }

            public function _output()
            {
                themeindex(
                    $this->sid,
                    $this->title,
                    $this->counter
                    // ------ Rest of fields ------
                );
            }
        }

    The problem: this class can't show the counter field's value, but if I change the counter field's name to something else, like hit, everything goes fine. I'm sure it works fine if I write it the normal PHP/MySQL way, but I need this to be OOP. Any comment on why it's sensitive to the counter field name?
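    A plausible cause, offered as a guess: with SELECT bt_stories.*, bt_tags.*, bt_topics.*, any column name that exists in more than one of the three tables collides in the associative row returned by sql_fetchrow, and the value from the table listed last wins. If bt_tags or bt_topics also has a counter column, it silently shadows bt_stories.counter, while a uniquely named column like hit would not collide. Aliasing the wanted column sidesteps the collision:

        SELECT bt_stories.*, bt_tags.*, bt_topics.*,
               bt_stories.counter AS story_counter,
               ...

        // and in the loop:
        $this->counter = $this->data['story_counter'];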


  • Handling Email Bouncebacks in Rails

    - by aressidi
    Hi there, I've built a very basic CRM system for my Rails app that allows me to send weekly user activity digests with custom text and create multi-part marketing messages which I can configure and send through a basic admin interface. I'm happy with what I've put together on the send side of things (minus the fact that I haven't tried to volume-test its capabilities), but I'm concerned about how to handle bouncebacks. I came across this plugin with associated scripts: http://github.com/kovyrin/bounces-handler. I'm using Google Apps to handle my mail and really don't know enough about Perl to want to mess with that plugin; I have enough headaches.

    I'm looking for a simple solution for processing bouncebacks in Rails. All my email will go out from an address like this, which will be managed in Google Apps: "[email protected]." What's the best workflow for this? Can anyone post an example solution they're using, keeping in mind that I'm using Google Apps for the mail? Any guidance, links, or basic workflow best practices to handle this would be greatly appreciated. Thanks! -A
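    One common lightweight pattern is to point outgoing mail's Return-Path at a dedicated bounce mailbox and poll it from a scheduled Rails task. A rough sketch with Ruby's standard Net::POP3 library; the mailbox address, credentials, and the naive bounce matching are all illustrative, and Google Apps must have POP access enabled for the account:

        require 'net/pop'

        pop = Net::POP3.new('pop.gmail.com', 995)
        pop.enable_ssl   # Google Apps serves POP over SSL only
        pop.start('bounces@example.com', 'secret') do |inbox|
          inbox.each_mail do |mail|
            # Very naive detection; real bounce formats vary widely by MTA.
            # DSN-style bounces carry a Final-Recipient header in the body.
            if mail.pop =~ /Final-Recipient:.*?;\s*(\S+)/m
              bounced_address = Regexp.last_match(1)
              # e.g. look up the user by bounced_address and flag the email
            end
            mail.delete
          end
        end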


  • Utility for indexing a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) of source files, and I need to index them so I can find files fast (find-as-I-type) and open them for compare/analysis. I don't want it to scan the content, just a filename index for quick lookup. I do this when trying to determine if a class exists in a given tree (we maintain directory trees for each release, and they have a lot of files), and sometimes I want to quickly check files to see how something was implemented, etc. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once, which is why running find every time is way too slow, and dumping the output of 'find .' to a file and then searching that file is a bit tedious. It's kind of like how "Find Resource" works in Eclipse after it indexes all files, but it's a bit of a chore to import/remove directories in Eclipse every time, and Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)
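    A minimal sketch of the read-the-tree-once idea in Python: walk the tree a single time, cache every path, then do substring matching against the cached list, which is fast enough for find-as-you-type over a few hundred thousand names:

        import os

        def build_index(root):
            """Walk the tree once (the only slow, remote-I/O step) and cache paths."""
            index = []
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    index.append(os.path.join(dirpath, name))
            return index

        def lookup(index, fragment):
            """Find-as-you-type style match against the cached file names."""
            fragment = fragment.lower()
            return [p for p in index if fragment in os.path.basename(p).lower()]

        # index = build_index('/mnt/release-2.3')   # run once per tree
        # lookup(index, 'foo')                      # instant thereafter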


  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: Profiling the application gives me the time taken by functions. This time increases under high load, and the time consumed may be due to complex calculation or to waiting for CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds on 100 requests vs. 1.25 sec on 1 request.

    More info on the configuration:

    - Nginx + uWSGI with 4 workers
    - No database used; responses come from a REST API
    - On the 1st hit the response of the REST API gets cached, therefore it doesn't make a difference
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
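    For separating "CPU actually burned" from "time spent waiting", the standard library already covers a lot: os.times() reads this process's CPU time rather than wall time, and cProfile gives a per-function breakdown. A minimal sketch (handle_request is a stand-in for the view code under investigation):

        import cProfile
        import pstats
        import os

        def handle_request():
            # stand-in for the Django view under load
            return sum(i * i for i in range(100000))

        # CPU seconds consumed by this process, independent of wall-clock waits
        before = os.times()
        handle_request()
        after = os.times()
        print('user CPU: %.3fs, system CPU: %.3fs'
              % (after[0] - before[0], after[1] - before[1]))

        # Per-function breakdown: 'tottime' is time inside the function itself,
        # 'cumtime' includes callees
        cProfile.run('handle_request()', 'prof.out')
        pstats.Stats('prof.out').sort_stats('cumulative').print_stats(10)

    If a function's profile time balloons under load while its CPU time stays flat, the workers are starved or waiting, not computing.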


  • Vacancy Tracking Algorithm implementation in C++

    - by Dave
    I'm trying to use the vacancy tracking algorithm to perform transposition of multidimensional arrays in C++. The arrays come as void pointers, so I'm using address manipulation to perform the copies. Basically, the algorithm starts with an offset and works its way through the whole 1-D representation of the array like Swiss cheese, knocking out other offsets until it gets back to the original one. Then you have to start at the next untouched offset and do it again. You repeat until all offsets have been touched.

    Right now, I'm using a std::set to hold all possible offsets (0 up to the product of the dimensions of the array). Then, as I go through the algorithm, I erase from the set. I figured this would be fastest because I need to randomly access offsets in the tree/set and delete them, and then quickly find the next untouched/undeleted offset.

    First of all, filling up the set is very slow and it seems like there must be a better way: it individually calls new for every insert, so if I have 5 million offsets, that's 5 million allocations, plus constant rebalancing of the tree, which as you know is not fast for a pre-sorted list. Second, deleting is slow as well. Third, assuming 4-byte data types like int and float, I'm using up as much memory as the array itself just to store the list of untouched offsets. Fourth, determining whether there are any untouched offsets and getting one of them is fast, which is a good thing. Does anyone have suggestions for any of these issues?
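    One alternative worth sketching: the offsets are dense integers in [0, N), so a flat bitmap can replace the tree entirely. std::vector<bool> packs one bit per offset instead of a multi-word tree node, there is no per-insert allocation, and the "next untouched offset" query becomes a forward scan with a cursor that never moves backward, so all scans together cost O(N):

        #include <cstddef>
        #include <vector>

        // Tracks which offsets of the length-n cycle walk remain untouched.
        class OffsetTracker {
        public:
            explicit OffsetTracker(std::size_t n) : visited_(n, false), cursor_(0) {}

            void mark(std::size_t offset) { visited_[offset] = true; }

            // Next untouched offset to start a cycle from, or n if finished.
            std::size_t next_untouched() {
                while (cursor_ < visited_.size() && visited_[cursor_]) ++cursor_;
                return cursor_;
            }

        private:
            std::vector<bool> visited_;  // 1 bit per offset vs. dozens of bytes per set node
            std::size_t cursor_;         // every offset below this is already marked
        };

    This directly addresses the fill cost (one allocation total), the delete cost (a bit write), and the memory footprint (N/8 bytes instead of N tree nodes).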


  • What is a good solution to log the deletion of a row in MySQL?

    - by hobodave
    Background

    I am currently logging the deletion of rows from my tickets table at the application level. When a user deletes a ticket, the following SQL is executed:

        INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message)
        VALUES (9, 4, 'WARN', NOW(),
                "TICKET: David A. deleted ticket #6 from Foo");

    Please do not offer schema suggestions for the alert_log table. Fields:

    - user_id - user id of the logged-in user performing the deletion
    - priority - always 4
    - priorityName - always 'WARN'
    - timestamp - always NOW()
    - message - format: "[NAMESPACE]: [FullName] deleted ticket #[TicketId] from [CompanyName]", where NAMESPACE is always TICKET, FullName is the full name of the user identified by user_id above, TicketId is the primary key ID of the ticket being deleted, and CompanyName comes from the ticket's company via tickets.company_id

    Situation/Questions

    Obviously this solution does not work if a ticket is deleted manually from the MySQL command-line client. However, now I need it to. The issues I'm having are as follows:

    1. Should I use a PROCEDURE, FUNCTION, or TRIGGER? My analysis: a TRIGGER I don't think will work, because I can't pass parameters to it and it would fire when my application deleted the row too. PROCEDURE or FUNCTION, not sure. Should I return the number of deleted rows? If so, would that require a FUNCTION?
    2. How should I account for the absence of a logged-in user? Possibilities: using either a PROC or FUNC, require the invoker to pass in a valid user_id; require the user to pass in a string with the name; use CURRENT_USER (meh); hard-code the FullName to just be "Database Administrator". Could the name be an optional parameter? I'm rather green when it comes to sprocs.
    3. Assuming I went with the PROC/FUNC approach, is it possible to outright restrict regular DELETE calls to this table, yet still allow users to call this PROC/FUNC to do the deletion for them?

    Ideally the solution is usable by my application as well, so that my code stays DRY.
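    A sketch of the PROCEDURE route in MySQL, using names from the question where known (the users and companies tables and their columns are assumed): the actor's id comes in as a parameter, the log row is written from the ticket before it disappears, and ROW_COUNT() reports the deleted-row count, so no FUNCTION is needed for that.

        DELIMITER $$
        CREATE PROCEDURE delete_ticket(IN p_ticket_id INT, IN p_user_id INT)
        BEGIN
            -- Build the log message while the ticket row still exists
            INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message)
            SELECT p_user_id, 4, 'WARN', NOW(),
                   CONCAT('TICKET: ', u.full_name, ' deleted ticket #', t.id,
                          ' from ', c.name)
            FROM tickets t
            JOIN companies c ON c.id = t.company_id
            JOIN users u     ON u.id = p_user_id
            WHERE t.id = p_ticket_id;

            DELETE FROM tickets WHERE id = p_ticket_id;
            SELECT ROW_COUNT() AS rows_deleted;  -- a procedure can return this
        END$$
        DELIMITER ;

    Restricting ad-hoc deletes then becomes a grants question: give interactive accounts EXECUTE on the procedure but not DELETE on tickets (procedures run with the definer's rights by default), and have the application call the same procedure so the logic stays DRY.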


  • Embedded Development Board

    - by ALF3130
    I'm new to the embedded development world and am looking to get my very first board. After some research, I realize that there aren't many choices with FPUs. This is important for my project, as I'm going to be doing quite a bit of floating-point computation. I found the Mini2440, which seems to run on the ARM920T core. This particular unit is perfect for my needs (decent price, all the right I/O ports, and a touch screen to boot), but it seems that it doesn't have an FPU. I don't know how big a penalty I'd be paying for FP emulation, so I'm unsure whether to pull the trigger on this one. That said:

    1. Can someone please confirm whether this product (Mini2440) has an FPU or not?
    2. My project will do image capture and analysis. Does anyone have any experience with running things like OpenMP on such platforms?
    3. Please suggest any other similar boards in the <= $200 price range that have an FPU.

    This world is new to me, so any other advice or things I should be aware of is much appreciated.


  • UnknownHostException again!

    - by Nitesh Panchal
    Hello, I posted a question previously and everyone answered that there was some problem with DNS, but I have since changed my DNS to many addresses, and now I have the most reliable one, Google DNS: 8.8.8.8. Still I get the same UnknownHostException. What can be the problem? This is my code:

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document doc = db.parse("http://rss.news.yahoo.com/rss/india");

    In fact, even if I pass something very common as the address, like http://google.com, I still get the same error. Please help me :( I have my submissions tomorrow. Thanks in advance :)

    EDIT: If I type the same address in my Mozilla, it works great, so I am sure that there is no DNS problem.

    2nd EDIT: I found this link: http://www.ehow.com/how_4747553_fix-unknownhostexception-java-applications-ubuntu.html. But when I run the command sudo apt-get install lib32nss-mdns I get "package not found". Somebody even mentioned -Djava.net.preferIPv4Stack=true, but where do I write that -D option? I am using NetBeans 6.8 to run my web application.
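    On that last point: -Djava.net.preferIPv4Stack=true is a JVM system property, not a Java statement. It goes on the java command line, or in NetBeans under Project Properties > Run > VM Options. The equivalent in code, which must run before the first network operation, is:

        // Same effect as passing -Djava.net.preferIPv4Stack=true to the JVM
        System.setProperty("java.net.preferIPv4Stack", "true");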


  • Application crash

    - by Ovi
    I have an application that, like any other app, crashes once in a while for various reasons. When it crashes, it does so gracefully: the users get a nice message about the crash, and at the same time the crash is reported to the server for analysis so it can be fixed in future versions. However, I would like the app to keep working through the crash. What that means is that I would like to run the forms in an 'atomic' way: if one goes down, it doesn't take down the entire app, and the users should only need to redo the work done in that particular form. Is this something that can be done through architecture? Or do the newer framework versions have something to aid with this? The application is built mostly in C# on the 3.5 framework, but it also uses some external references, some COMs, and web service references. I am not interested in an answer of 'well, fix the crashes': me, my team and the testing team are already working round the clock on that.
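    WinForms offers a partial version of this: exceptions that escape UI-thread event handlers can be routed to a handler instead of ending the process, which lets you close just the faulty form. A sketch (C#/.NET 3.5; MainForm and ReportToServer are stand-ins, and this does not cover exceptions on background threads or process-corrupting failures):

        using System;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Route UI-thread exceptions to a handler instead of crashing
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += (sender, e) =>
                {
                    ReportToServer(e.Exception);         // existing crash reporting
                    Form faulty = Form.ActiveForm;
                    if (faulty != null) faulty.Close();  // drop only the active form
                    MessageBox.Show("That window hit a problem; its work must be redone.");
                };
                Application.Run(new MainForm());
            }

            static void ReportToServer(Exception ex) { /* hypothetical reporting hook */ }
        }

    For stronger isolation (COM components included), hosting risky work in a separate process is the usual next step, since a corrupted process cannot reliably save itself.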


  • Help with Java threads or executors: executing several MySQL selects, inserts and updates simultaneously

    - by Martin
    Hi. I'm writing an application to analyse a MySQL database, and I need to execute several DMLs simultaneously; for example:

        // In ResultSet rsA: Select * from A;
        rsA.beforeFirst();
        while (rsA.next()) {
            id = rsA.getInt("id");
            // Retrieve data from table B: Select * from B where B.Id = id;
            // Crunch some numbers using the data from B
            // Close ResultSet B
        }

    I'm declaring an array of data objects, each with its own Connection to the database, which in turn calls several methods for the data analysis. The problem is that all threads use the same connection, so all tasks throw the exception: "Lock wait timeout exceeded; try restarting transaction". I believe there is a way to write the code such that any given object has its own connection and executes the required tasks independently of any other object. For example:

        DataObject[] dataObject = new DataObject[N + 1];
        for (int i = 0; i <= N; i++) {
            dataObject[i] = new DataObject(id[i]);
        }
        // The 'DataObject' class has its own connection to the database,
        // so each instance of the object should use its own connection.
        // It also has a "run" method, which contains all the tasks required.

        Executor ex = Executors.newFixedThreadPool(10);
        for (int i = 0; i <= N; i++) {
            ex.execute(dataObject[i]);
        }

    Here is where the problem is: each instance creates a new connection, but every DML from any of the objects is funnelled through just one connection (in the MySQL command line, "SHOW PROCESSLIST;" shows every connection, and all but one are idle). Can you point me in the right direction? Thanks
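    For the symptom described (many connections open but all statements flowing through one of them, plus lock-wait timeouts), a usual culprit is a Connection held in a static field or otherwise shared between instances. A sketch of the per-object pattern with plain JDBC, where the URL, credentials and query are placeholders:

        import java.sql.*;

        class DataObject implements Runnable {
            private final int id;

            DataObject(int id) { this.id = id; }

            @Override
            public void run() {
                Connection con = null;
                try {
                    // Opened inside run(), so every task gets its own connection
                    con = DriverManager.getConnection(
                            "jdbc:mysql://localhost/mydb", "user", "pass");
                    PreparedStatement ps =
                            con.prepareStatement("SELECT * FROM B WHERE B.Id = ?");
                    ps.setInt(1, id);
                    ResultSet rs = ps.executeQuery();
                    while (rs.next()) {
                        // crunch some numbers using the data from B
                    }
                    rs.close();
                    ps.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                } finally {
                    try { if (con != null) con.close(); } catch (SQLException ignored) {}
                }
            }
        }

    If DataObject instead keeps its Connection in a static field (or copies a reference passed in from outside), every instance shares the one connection, which matches the SHOW PROCESSLIST observation exactly.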


  • How to retrieve data from a DB and display it in a list?

    - by raji
    In the code below I am passing catID to db.getNews(catID):

        super.onCreate(savedInstanceState);
        setContentView(R.layout.newslist);
        Bundle extras = getIntent().getExtras();
        int catID = extras.getInt("cat_id");
        mInflater = (LayoutInflater) getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        final ListView lv = (ListView) findViewById(R.id.category);
        lv.setAdapter(new ArrayAdapter<String>(this, R.layout.newslist, db.getNews(catID)) {
            public View getView(int position, View convertView, ViewGroup parent) {
                View row;
                if (null == convertView) {
                    row = mInflater.inflate(R.layout.newslist, null);
                } else {
                    row = convertView;
                }
                TextView tv = (TextView) row.findViewById(R.id.newslist_text);
                tv.setText(getItem(position));
                return row;
            }
        });

    The getNews(int catID) function is below:

        public NewsItemCursor getNews(int catID) {
            String sql = "SELECT title FROM news WHERE catid = " + catID + " ORDER BY id ASC";
            SQLiteDatabase d = getReadableDatabase();
            NewsItemCursor c = (NewsItemCursor) d.rawQueryWithFactory(
                    new NewsItemCursor.Factory(), sql, null, null);
            c.moveToFirst();
            d.close();
            return c;
        }

    But I'm getting an error that the ArrayAdapter is undefined. Can anyone help me resolve this and make the data display in the list?
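    The likely mismatch behind that error: ArrayAdapter<String> has no constructor taking a Cursor, which is what getNews() returns. One sketch of a fix is to materialize the titles into a List<String> before handing them to the adapter (a SimpleCursorAdapter consuming the Cursor directly would be the other route); column and table names as in the question:

        // In the DB helper: collect the titles, then close the cursor and DB safely
        public List<String> getNewsTitles(int catID) {
            List<String> titles = new ArrayList<String>();
            SQLiteDatabase d = getReadableDatabase();
            Cursor c = d.rawQuery(
                    "SELECT title FROM news WHERE catid = ? ORDER BY id ASC",
                    new String[] { String.valueOf(catID) });
            while (c.moveToNext()) {
                titles.add(c.getString(0));
            }
            c.close();
            d.close();
            return titles;
        }

        // In the Activity:
        // lv.setAdapter(new ArrayAdapter<String>(this, R.layout.newslist,
        //         db.getNewsTitles(catID)) { ... });

    As a side note, the original getNews() closes the database before the Cursor is consumed, which would cause trouble even once the adapter compiles.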


  • How to optimize this MYSQL table?

    - by Lost_in_code
    This is for an upcoming project. I have two tables: the first one keeps track of photos, and the second keeps track of each photo's rank.

    Photos:

        +-------+-----------+------------------+
        | id    | photo     | current_rank     |
        +-------+-----------+------------------+
        | 1     | apple     | 5                |
        | 2     | orange    | 9                |
        +-------+-----------+------------------+

    The photo rank keeps changing on a regular basis, and this is the table that tracks it:

    Ranks:

        +-------+-----------+----------+-------------+
        | id    | photo_id  | ranks    | timestamp   |
        +-------+-----------+----------+-------------+
        | 1     | 1         | 8        | *           |
        | 2     | 2         | 2        | *           |
        | 3     | 1         | 3        | *           |
        | 4     | 1         | 7        | *           |
        | 5     | 1         | 5        | *           |
        | 6     | 2         | 9        | *           |
        +-------+-----------+----------+-------------+
        * = current timestamp

    Every rank is tracked for reporting/analysis purposes. I talked to someone who has experience in this field and he told me that storing ranks as above is the way to go. But I'm not so sure yet.

    The problem here is data redundancy. There are going to be tens of thousands of photos. The photo rank changes on an hourly basis (many times within minutes) for recent photos, but less frequently for older photos. At this rate the table will have millions of records within months, and since I do not have experience working with large databases, this makes me a little nervous.

    I thought of this:

    Ranks:

        +-------+-----------+--------------------+
        | id    | photo_id  | ranks              |
        +-------+-----------+--------------------+
        | 1     | 1         | 8:*,3:*,7:*,5:*    |
        | 2     | 2         | 2:*,9:*            |
        +-------+-----------+--------------------+
        * = current timestamp

    That means some extra code in PHP to split the rank/time (and sorting), but that looks OK to me. Is this a correct way to optimize the table for performance? What would you recommend? Any suggestions would be great.
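    For scale worries of this shape, the usual first lever is indexing the history table to match the reporting queries, rather than packing values into a delimited string (which defeats indexing and SQL date functions). A sketch:

        -- Covers "rank history of one photo, newest first"
        ALTER TABLE ranks ADD INDEX idx_photo_time (photo_id, timestamp);

        -- A typical report stays a straight index range scan even at millions of rows
        SELECT ranks, timestamp
        FROM ranks
        WHERE photo_id = 1
        ORDER BY timestamp DESC;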


  • Unable to set up SSL support for Apache 2 on Debian

    - by Francesco
    I am trying to set up SSL support for Apache 2 on Debian. Versions: Debian GNU/Linux 6.0, apache2 2.2.16-6+squeeze1. I have followed a lot of how-tos for days but I couldn't make it work. Here are my steps and configuration files (ServerName and DocumentRoot are changed for privacy; if they matter, tell me):

        # mkdir /etc/apache2/ssl
        # openssl req $@ -new -x509 -days 365 -nodes \
            -out /etc/apache2/apache.pem -keyout /etc/apache2/apache.pem

    At this point I have a doubt about the permissions on apache.pem; at this step they are:

        -rw-r--r-- 1 root root

    Maybe it has to belong to www-data? Then I enable the ssl module:

        # a2enmod ssl
        # /etc/init.d/apache2 restart

    I modify /etc/apache2/sites-available/default-ssl in this way (I put port 8080 because I need port 443 for another purpose):

        <VirtualHost *:8080>
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/apache.pem
            SSLCertificateKeyFile /etc/apache2/ssl/apache.pem
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options Indexes FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user1/public_html/
            ServerName first.server.org
            # Other directives here
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user2/public_html/
            ServerName second.server.org
            # Other directives here
        </VirtualHost>

    I have to point out that the same configuration works over HTTP (it is a copy of /etc/apache2/sites-available/default with some differences: the port and the SSL support). My /etc/apache2/ports.conf is the following:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz
        #NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            #NameVirtualHost *:8080
            Listen 8080
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 8080
        </IfModule>

    Any suggestion? Thanks
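    Two quick sanity checks that follow from the steps as written (observations, not a guaranteed diagnosis): the openssl command writes the PEM to /etc/apache2/apache.pem, while the vhost reads /etc/apache2/ssl/apache.pem; and root-owned 644 is fine, since Apache reads certificates as root before dropping privileges. Worth verifying:

        # Does the file the vhost points at actually exist?
        ls -l /etc/apache2/apache.pem /etc/apache2/ssl/apache.pem

        # Is it a parseable certificate plus key?
        openssl x509 -in /etc/apache2/ssl/apache.pem -noout -subject -dates
        openssl rsa  -in /etc/apache2/ssl/apache.pem -check -noout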


  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (quad-core CPU, 4 GB RAM, Win7 Home Premium 64-bit), but I have a problem burning .dvd images to double-layer (8.5 GB) DVDs. I have wasted many DVD+R DL discs, to no avail. Here is a short explanation of what I did.

    I'm using ImgBurn v2.5.0.0 (the latest version), trying to burn an image file (.dvd) which sits alongside the related .iso file in the same folder. In ImgBurn, I select the file with the .dvd extension and set the writing speed to 2.4x. The burning process starts normally, but at around 7% it gives an I/O write error (the ImgBurn error screenshots are not reproduced here).

    I wasted 3 discs (Magic, made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing. My DVD writer is an LG GH22NP20 with an IDE connection. I updated its firmware from 1.04 to 2.00, but again had no success in burning. Then my cousin brought over his LG (an older model) which, he claims, successfully wrote DL discs of the same brand (Magic). I unplugged my LG, plugged the older one in, and tried to burn the image again. It also gave an I/O error, without even reaching 7%. I tried another burning program (CloneCD) but failed again. Then I bought other brands (TDK and Verbatim) and tried to burn the image. The burning process started successfully but failed again, at around 14% for the Verbatim and 25% for the TDK.

    I've burned lots of 4.7 GB DVD+Rs and DVD-Rs successfully with this LG writer, without a single error, but this case is very bothersome. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or to my hardware configuration? Thanks for your help.

    Edit: My burner works on my cousin's machine, so the problem must be related to my system. What could be the reason?

    Latest news: I borrowed an external USB DVD writer from a friend, a PHILIPS SPD3000CC (an old model). Guess what! It's burning DVD+R DLs successfully! How come an internal DVD writer in a brand-new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with a SATA connection instead of IDE...


  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just set up Active Directory in my company and imported all Linux users and groups. On the Linux client (configured to ask LDAP in nsswitch.conf): if I do a plain ldapsearch against the AD LDAP server, I get the complete set of about 2580 users. But this only gets part of all the users, 1221 in number:

        getent passwd | wc -l

    Running it with strace shows what looks like an attempt to reconnect. My ideas were: does the Linux authentication procedure run its LDAP search with a parameter incompatible with AD's LDAP? Or perhaps it is an encoding issue, since the Windows users are entered in AD with all kinds of characters. Maybe someone could shed light on this and give a hint on how to debug it further!?

    Here's our ldap.conf:

        host audc01.mycompany.de audc03.mycompany.de
        base ou=location,dc=mycompany,dc=de
        ldap_version 3
        binddn cn=manager,ou=location,dc=mycompany,dc=de
        bindpw Password
        timelimit 120
        idle_timelimit 3600
        nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub
        nss_base_group ou=location,dc=mycompany,dc=de?sub

        # RFC 2307 (AD) mappings
        nss_map_objectclass posixAccount User
        # nss_map_objectclass shadowAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute cn sAMAccountName
        # Display Name
        nss_map_attribute gecos cn
        ## nss_map_attribute homeDirectory unixHomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell

        # PAM attributes
        pam_login_attribute sAMAccountName
        # Location based login
        pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de
        pam_member_attribute msSFU30PosixMember
        ## pam_lookup_policy yes
        pam_filter objectclass=User

        nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data

    And here is the tail of the strace output from getent passwd:

        poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}])
        read(4, "0\204\0\0\0A\2\1", 8) = 8
        read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63
        stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0
        geteuid32() = 12560
        getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0
        getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0
        time(NULL) = 1297684722
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        munmap(0xb7617000, 1721) = 0
        close(3) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        write(4, "0\5\2\1\5B\0", 7) = 7
        shutdown(4, 2 /* send and receive */) = 0
        close(4) = 0
        shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor)
        close(-1) = -1 EBADF (Bad file descriptor)
        exit_group(0) = ?
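    One pattern that fits "ldapsearch sees everything, getent stops partway": Active Directory caps a single search result set at its MaxPageSize (1000 by default) unless the client requests paged results, and nss_ldap only asks for paging when told to. If this nss_ldap build supports the options, these ldap.conf lines are worth a try (a guess, not a confirmed fix):

        # Request results from AD in pages instead of one size-capped result set
        nss_paged_results yes
        pagesize 1000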


  • Unclear pricing of Windows Azure

    - by Dirk
    How do you people think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill from more than 100 euro for 3 STOPPED instances (not) running "HelloAzure". I the past I also played around with Amazon Web Services. Amazon doesn't charge for stopped instances. I was wondering: "Should I have known this before, or is Microsoft doing a bad job in clear communication in the pricing model?" Quote from http://www.microsoft.com/windowsazure/pricing/ : Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours. I read this, so I stopped all instances after a few hours playing around. Now it seems I should have deleted them, not just "stopped". Strictly speaking, all depends on the definition of the word "deployed". If you upload an application, but it is not running, can it still be regarded as being "deployed"? May be, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what this means. Technically speaking, an uploaded application only uses (read: should only use / needs only) a few MB harddrive space. It doesn't require any CPU time. If Azure wants to reserve CPU's for not running instances.. well, that's Azure's choice, not mine. I don't want to spread a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be more clear about their pricing model or do you think it's clear enough? Second question: did anyone got refunded for a similar case? Thanks in advance! UPDATE 27-01-2011 I sent an email to customer support a few days ago, but I guess that didn't reach anu human being because I didn't hear anything from it. So, I made a telephone call today with a Dutch customer support representative (I live in Holland). She totally understood the problem and she's trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or similar) problem. UPDATE 28-01-2011 I just received a phonecall from Microsoft support. The lady told me some good news: the money will refunded. However, the invoice has not been made yet, and my creditcard will first be chardged, after which it will be refunded, but hey, that's no problem for me! I'm glad the way it's solved! Thanks everybody!


  • Setting up SSL on apache on linux ubuntu

    - by ThomasReggi
    I'm trying to get SSL to run on my apache web server. I do not have the DNS for the domain setup yet is that an issue? How do I setup SSL on my web server? When I start apache it fails. root@vannevar:/etc/apache2/ssl# service apache2 start * Starting web server apache2 Action 'start' failed. The Apache error log may have more information. The log stats that it's unable to read the certificate. [Thu Jun 28 15:01:02 2012] [error] Init: Unable to read server certificate from file /etc/apache2/ssl/www.example.com.csr [Thu Jun 28 15:01:02 2012] [error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag [Thu Jun 28 15:01:02 2012] [error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error The contents of /etc/apache2/httpd.conf ServerName [SERVERIP] The contents of /etc/apache2/ports.conf # If you just change the port or add more ports here, you will likely also # have to change the VirtualHost statement in # /etc/apache2/sites-enabled/000-default # This is also true if you have upgraded from before 2.2.9-3 (i.e. from # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and # README.Debian.gz NameVirtualHost [SERVERIP]:443 NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> The contents of /etc/apache2/sites-available/www.example.com <VirtualHost *:80> ServerAdmin [email protected] ServerName example.com ServerAlias www.example.com DocumentRoot /srv/sites/example.com/public/ ErrorLog /srv/sites/example.com/logs/error.log CustomLog /srv/sites/example.com/logs/access.log combined </VirtualHost> <VirtualHost [SERVERIP]:443> SSLEngine On SSLCertificateFile /etc/apache2/ssl/www.example.com.csr SSLCertificateKeyFile /etc/apache2/ssl/www.example.com.key SSLCACertificateFile /etc/apache2/ssl/comodo.crt ServerAdmin [email protected] ServerName example.com ServerAlias www.example.com DocumentRoot /srv/sites/example.com/public/ ErrorLog /srv/sites/example.com/logs/error.log CustomLog /srv/sites/example.com/logs/access.log combined </VirtualHost>


  • Mac OS X Lion 10.7.2 update breaks SSL

    - by mcandre
    Summary: After updating from 10.7.1 to 10.7.2, neither Safari nor Google Chrome can load GMail. Spinning Beachballs all around. The problem isn't GMail; Firefox loads GMail just fine. The problem isn't limited to Safari or Google Chrome; other applications also have trouble with SSL: Gilgamesh and Safari. Any program that uses WebKit (Google Chrome, Safari) or a Cocoa library (Gilgamesh) to access the Internet has trouble loading secure sites.

    Analysis: The various forums online suggest a handful of fixes, none of which work.

    - Fix #1: Open Keychain Access.app and delete the Unknown certificate. But the 10.7.2 update also prevents Keychain Access from loading; the Keychain program itself Spinning Beachballs.
    - Fix #2: Delete ~/Library/Keychains/login.keychain and /Library/Keychains/System.keychain. This temporarily resolves the issue and lets you load secure sites, but a minute or two after rebooting or hibernating somehow magically undoes the fix, so you have to delete these files over and over.
    - Fix #3: Delete ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob*. There is a rumor that the new MobileMe/iCloud service ubd is causing the issue. This fix does not resolve the issue.
    - Fix #4: Open Keychain Access, open the Preferences, and disable OCSP and CRL. This fix does not resolve the issue.
    - Fix #5: Use the 10.7.0 to 10.7.2 combo installer, rather than the 10.7.1 to 10.7.2 installer. When I run the combo installer, it stays forever at the "Validating Packages..." screen; the combo installer itself is bugged to He||. I force-quit the installer, ran "sudo killall installd" to force-quit the background installer process, and reran the combo installer. Same problem: it stalls at "Validating Packages..."

    Recap: The only fix that works is deleting the keychains, but you have to do this every time you reboot or wake from hibernation. There is some evidence that ubd continually corrupts the keychain files, but the suggested ubd fix of deleting ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob* does not resolve this issue. Evidently, something is corrupting the keychain over and over and over.

    Also posted on the Apple Support Communities.


  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old, running pretty much 24/7, an HP server of some description. We have an Ultrium 448 cycling LTO2 400 GB tapes daily, incrementally backing up approximately 100 GB worth of data (20 GB C:\ and system state, 40 GB Exchange, 40 GB database for some crap marketing software) with BackupExec 10D. As of 5 months ago, the backups have been consistently failing with I/O errors, bad reads and some write errors. When I say consistently, I mean every time: we haven't had a proper backup for the entire 5 months, so if the server explodes tomorrow, 7 years' worth of data will just cease to exist.

    I've only just recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to a USB 2.0 external drive. It was excruciatingly slow; in fact, it was so slow that it took 40 hours and still wasn't finished. I ended up cancelling it and reconfiguring the selections to reduce the file size. This, however, isn't a permanent solution.

    I concluded that the I/O error comes either from a faulty tape drive (which has a tape stuck in it right now that isn't coming out) or from a dying SCSI controller. Neither is good news, and both are extremely expensive to fix. I'm operating on an extremely low budget, so I have been looking at outsourcing the backups. A company in Sydney (where I'm located) offers incremental online backups via a NAS. It costs almost double the price of a new tape drive, but offers monthly repayments that will let us get through times when cash flow is minimal.

    It seems like a sweet deal, but it is still a little bit pricey. So I'm looking for a cheaper yet reliable solution, maybe an in-house NAS or something offsite. The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future, and I must be able to restore it to another server with different hardware.


  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick run down of what I'm doing. I'm trying to mount a FUSE drive to an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon). s3fs is compiled from source according to the instructions etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like: # /etc/fstab: static file system information. # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 LABEL=uec-rootfs / ext4 defaults 0 0 /dev/sda2 /mnt auto defaults,nobootwait,comment=cloudconfig 0 2 /dev/sda3 none swap sw,comment=cloudconfig 0 0 s3fs#mybucket /mnt/s3/mybucket fuse default_acl=public-read,use_cache=/tmp,allow_other 0 0 So the good news is that this works fine. On reboot the connection mounts correctly. I can also do: $ sudo umount /mnt/s3/mybucket $ sudo mount -a $ mountpoint /mnt/s3/mybucket /mnt/s3/mybucket is a mountpoint Great, right? Well here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process: mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected I isolated it down the the problem and built a fabric task that reproduces the problem: def remount_s3fs(): sudo("mount -a") Which does: [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs' [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a [And yes, I was sure to unmount it before running this task.] When I check the mount using mountpoint I get: $ mountpoint /mnt/s3/mybucket mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected Done. But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode: [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs' [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a" Again, I get that transport endpoint not connected error. I've also tried copying and pasting the exact command run into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So...that's my problem. Any ideas?

