Search Results

Search found 1172 results on 47 pages for 'measure'.

Page 40/47 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Flex component setActualSize

    - by dlots
    I am a little confused about the setActualSize() method. From what I've read, if it is not called on a component by its parent, the component will not be rendered, so setActualSize() appears to be a critical method directly tied to rendering the UIComponent. It also appears that the width and height properties of UIComponent override the functionality of the width and height properties of flash.display.DisplayObject: they are not directly bound to the rendering of the object, but are virtual values used mainly by getExplicitOrMeasuredWidth()/Height() when the parent calls the component's setActualSize(). So the questions are:

    1) Why isn't the default behavior of every component simply to call setActualSize(getExplicitOrMeasuredWidth(), getExplicitOrMeasuredHeight()) on itself?

    2) This question stems from the one above and from the behavior as I understand it: does setActualSize() change the visibility of the component? It appears that a component is not rendered until setActualSize() is called; but if the component contains display object children (expected behavior, since it can calculate measurement on itself) and has been added to the display list, then the only reason Flash isn't rendering it would be that it's not visible.

    3) Relating to question 1: does UIComponent's updateDisplayList() go through the component's children, calling setActualSize() on each of them?

    Read the article

  • In Scrum, should a team remove points from (defect) stories that don't result in a code change?

    - by CanIgtAW00tW00t
    My work uses a Scrum-like process to manage projects. I say Scrum-like because we call it Scrum, but our project managers exclude aspects of Scrum that are inconvenient (most notably customer interaction). One of the stories in our current sprint was to correct a defect. After spending almost an entire day working on the issue, I determined it was the result of a permissions problem, so I didn't end up modifying any code. Our Scrum master / project manager decided that no code change equals zero points. I know that Scrum points are supposed to measure size / complexity and not time, but our Scrum master invests a lot of time in preparing graphs and statistical information from past sprints (average velocity, average points completed, etc.). I've always been of the opinion that for statistics to be meaningful in any way, the data must be as accurate as possible. All of our data is fuzzy to begin with, because, from time to time, we're encouraged by the Scrum master to "adjust" our size / complexity estimates, both up and down. I'd like to hear other developers' / Scrum team members' thoughts on the merits of statistics based on past sprints, and also whether they think it's appropriate to "adjust" size / complexity estimates in the middle of a sprint, or to remove all points from a story altogether in situations like the one I've just described.

    Read the article

  • Storing a result set into an array

    - by OVERTONE
    I know this should be simple, and I'm probably staring straight at the problem, but once again I'm stuck and need the help of the code gurus. I'm trying to take one column from a table via JDBC and put its rows into an array. I do this as follows:

        public void fillContactList() {
            createConnection();
            try {
                Statement stmt = conn.createStatement();
                ResultSet namesList = stmt.executeQuery("SELECT name FROM Users");
                try {
                    while (namesList.next()) {
                        contactListNames[1] = namesList.getString(1);
                        System.out.println("" + contactListNames[1]);
                    }
                } catch (SQLException q) {
                }
                conn.commit();
                stmt.close();
                conn.close();
            } catch (SQLException e) {
            }
        }

    createConnection() is an already defined method that does what it obviously does. I create my result set, and while there is another row, I store the string from that column into an array, then print it out for good measure, to make sure it's there. The problem is that it's storing the entire column into contactListNames[1]. I want it to store column 1 row 1 into [1], then column 1 row 2 into [2], and so on. I know I could do this with a loop, but I don't know how to take only one row at a time from a single column. Any ideas? P.S. I've read the API; I just can't see anything that fits.
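
    A minimal sketch of the row-at-a-time idea, in Python with sqlite3 standing in for the JDBC code (the table and values are illustrative): each step of the cursor yields exactly one row, so appending inside the loop, or incrementing an index, gives each row its own slot instead of reusing slot [1].

        import sqlite3  # stand-in for JDBC; only the loop structure matters here

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Users (name TEXT)")
        conn.executemany("INSERT INTO Users VALUES (?)", [("alice",), ("bob",)])

        contact_list_names = []
        for (name,) in conn.execute("SELECT name FROM Users"):
            contact_list_names.append(name)  # one slot per row, not one slot reused

        print(contact_list_names)  # ['alice', 'bob']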

    Read the article

  • TextBlock doesn't get updated when rendered in memory?

    - by veechi
    I have a TextBlock that is part of a custom control. I added the custom control as a child of a Grid, which in turn is added as a child of a Canvas. All of these controls are instantiated in memory and are not rendered in the UI. When I update the value of the TextBlock and emboss the canvas onto an image, the updated value doesn't appear on the embossed image. Here is the code snippet:

        System.Windows.Controls.Canvas embossCanvas = new System.Windows.Controls.Canvas();
        System.Windows.Controls.Grid grid = new Grid();
        MyControl myctrl = new MyControl();
        int wd = (int)myctrl.ActualWidth;
        int ht = (int)myctrl.ActualHeight;
        embossCanvas.Width = wd;
        embossCanvas.Height = ht;
        grid.Children.Add(myctrl);
        embossCanvas.Children.Add(grid);
        myctrl.txtBlk.UpdateLayout();
        grid.UpdateLayout();
        embossCanvas.Measure(new System.Windows.Size(embossCanvas.Width, embossCanvas.Height));
        embossCanvas.Arrange(new System.Windows.Rect(0, 0, embossCanvas.Width, (int)embossCanvas.Height));
        embossCanvas.UpdateLayout();
        RenderTargetBitmap renderBmp = new RenderTargetBitmap(wd, ht, 96, 96, System.Windows.Media.PixelFormats.Default);
        renderBmp.Render(embossCanvas);

    Read the article

  • Why is a non-blocking TCP connect() occasionally so slow on Linux?

    - by pts
    I was trying to measure the speed of a TCP server I'm writing, and I've noticed that there might be a fundamental problem with measuring the speed of the connect() calls: if I connect in a non-blocking way, connect() operations become very slow after a few seconds. Here is the example code in Python:

        #! /usr/bin/python2.4
        import errno
        import os
        import select
        import socket
        import sys

        def NonBlockingConnect(sock, addr):
          while True:
            try:
              return sock.connect(addr)
            except socket.error, e:
              if e.args[0] not in (errno.EINPROGRESS, errno.EALREADY):
                raise
              os.write(2, '^')
              if not select.select((), (sock,), (), 0.5)[1]:
                os.write(2, 'P')

        def InfiniteClient(addr):
          while True:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
            sock.setblocking(0)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            # sock.connect(addr)
            NonBlockingConnect(sock, addr)
            sock.close()
            os.write(2, '.')

        def InfiniteServer(server_socket):
          while True:
            sock, addr = server_socket.accept()
            sock.close()

        server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
        server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server_socket.bind(('127.0.0.1', 45454))
        server_socket.listen(128)

        if os.fork():
          # Parent.
          InfiniteServer(server_socket)
        else:
          addr = server_socket.getsockname()
          server_socket.close()
          InfiniteClient(addr)

    With NonBlockingConnect, most connect() operations are fast, but every few seconds there is one connect() operation that takes at least 2 seconds (as indicated by 5 consecutive P letters in the output). When using sock.connect instead of NonBlockingConnect, all connect operations seem to be fast. How is it possible to get rid of these slow connect()s? I'm running an Ubuntu Karmic desktop with the standard PAE kernel: Linux narancs 2.6.31-20-generic-pae #57-Ubuntu SMP Mon Feb 8 10:23:59 UTC 2010 i686 GNU/Linux
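
    One way to narrow this down (a sketch, not from the post; Python 3, and it assumes the post's server is already listening on 127.0.0.1:45454) is to time each connect and look at where the slow ones cluster: stalls near whole multiples of the kernel's SYN-retransmission timeout would point at dropped SYNs (e.g. an overflowing listen backlog) rather than at connect() itself.

        import socket
        import time

        def timed_connects(addr, n=5000, threshold=0.5):
            """Connect n times, returning the attempts slower than threshold seconds."""
            slow = []
            for i in range(n):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                t0 = time.monotonic()
                s.connect(addr)  # blocking connect gives a clean per-handshake number
                dt = time.monotonic() - t0
                s.close()
                if dt > threshold:
                    slow.append((i, round(dt, 3)))
            return slow

        print(timed_connects(('127.0.0.1', 45454)))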

    Read the article

  • How to get TextWidth of string (without Canvas)?

    - by lyborko
    Hi, I would like to get the text width of a string before an application starts. Everything works fine as long as the Application.MainForm canvas is present. The problem is that when I try to dynamically create a TOrdinarium in the OnCreate event of the app's main form, a "Canvas does not allow drawing" error occurs (Application.MainForm is still nil at that point). I tried several ways to create a Canvas dynamically (one of them is written below), but it cannot measure text sizes without being attached to a parented control. Is there a way to make it work somehow? Thanks. I tried this:

        TOrdinarium = class(TCustomControl)
        private
          function GetVirtualWidth: integer;
        end;

        constructor TOrdinarium.Create(AOwner: TComponent);
        begin
          inherited;
          Width := GetVirtualWidth;
        end;

        function TOrdinarium.GetVirtualWidth: integer;
        var
          ACanvas: TControlCanvas;
        begin
          ACanvas := TControlCanvas.Create;
          TControlCanvas(ACanvas).Control := Application.MainForm;
          ACanvas.Font.Assign(Font);
          result := ACanvas.TextWidth('0');
          ACanvas.Free;
        end;

    Read the article

  • F# How to tokenise user input: separating numbers, units, words?

    - by David White
    I am fairly new to F#, but have spent the last few weeks reading reference materials. I wish to process a user-supplied input string, identifying and separating the constituent elements. For example, for this input:

        XYZ Hotel: 6 nights at 220EUR / night plus 17.5% tax

    the output should resemble something like a list of tuples:

        [ ("XYZ", Word); ("Hotel:", Word); ("6", Number); ("nights", Word);
          ("at", Operator); ("220", Number); ("EUR", CurrencyCode); ("/", Operator);
          ("night", Word); ("plus", Operator); ("17.5", Number); ("%", PerCent);
          ("tax", Word) ]

    Since I'm dealing with user input, it could be anything, so expecting users to comply with a grammar is out of the question. I want to identify the numbers (which could be integers, floats, negative...), the units of measure (optional, but possibly including SI or Imperial physical units, currency codes, and counts such as "night/s" in my example), mathematical operators (as math symbols or as words, including "at", "per", "of", "discount", etc.), and all other words. I have the impression that I should use active pattern matching -- is that correct? -- but I'm not exactly sure how to start. Any pointers to appropriate reference material or similar examples would be great.
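
    Active patterns would indeed be the idiomatic F# route; purely to show the shape of such a tokenizer, here is a sketch in Python using one ordered regex alternation (the token class names are borrowed from the post; the operator and currency word lists are illustrative and would need word-boundary handling in real use):

        import re

        # Ordered alternation: the first alternative that matches wins, so the
        # more specific classes (Number, CurrencyCode) come before the catch-all Word.
        TOKEN_RE = re.compile(r"""
            (?P<Number>-?\d+(?:\.\d+)?)
          | (?P<PerCent>%)
          | (?P<Operator>[/+*-]|at|per|of|plus|discount)
          | (?P<CurrencyCode>EUR|USD|GBP)
          | (?P<Word>\S+)
        """, re.VERBOSE)

        def tokenize(s):
            return [(m.group(), m.lastgroup) for m in TOKEN_RE.finditer(s)]

        print(tokenize("XYZ Hotel: 6 nights at 220EUR / night plus 17.5% tax"))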

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance:

    - Pre-normalize all the term vectors
    - Calculate a term vector for each group (n ~= 10,000) rather than for each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know!

    I have a few questions:

    - If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive.
    - Is there some clever way to only consider vectors which are likely to be close to the query document?
    - Is there some way to be more frugal about the amount of memory I'm using for all these vectors?

    Thanks!
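
    A minimal sketch of the pre-normalized cosine idea (the tokenization and group names are illustrative): with plain term-frequency vectors kept L2-normalized, similarity is just a sparse dot product, and a never-before-seen term only changes the new e-mail's own norm, so existing group vectors need no recomputation (that would differ under IDF-style weighting, which does shift with the corpus).

        import math
        from collections import Counter

        def unit_vector(text):
            """Term-frequency vector, L2-normalized so cosine is a plain dot product."""
            tf = Counter(text.lower().split())
            norm = math.sqrt(sum(v * v for v in tf.values()))
            return {t: v / norm for t, v in tf.items()}

        def cosine(u, v):
            if len(u) > len(v):       # iterate over the smaller dict
                u, v = v, u
            return sum(w * v.get(t, 0.0) for t, w in u.items())

        groups = {"billing": unit_vector("invoice payment overdue invoice"),
                  "support": unit_vector("password reset login help")}
        query = unit_vector("help with my login password")
        print(max(groups, key=lambda g: cosine(groups[g], query)))  # -> support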

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it under Solaris and Linux to measure the performance of applying this code to my application. The program works like this: initially, using the sbrk(0) system call, I take the base address of the heap region. After that I allocate 1.5 GB of memory using malloc(), then use memcpy() to copy 1.5 GB of content into the allocated area. Then I free the allocated memory. After freeing, I use the sbrk(0) system call again to view the heap size. This is where I am a little confused. On Solaris, even though I have freed the allocated memory (nearly 1.5 GB), the heap size of the process stays huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process is equal to the heap size before the 1.5 GB allocation. I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to release the memory immediately after the free() call. Also, please explain why the same problem does not occur on Linux. Can anyone help me out with this? Thanks, Santhosh.

    Read the article

  • How can I tell when the Text of a System.Windows.Forms.GroupBox wraps to the next line?

    - by fre0n
    I'm creating a GroupBox at runtime and setting its Text property. Usually the text is only on one line, but sometimes it wraps. The problem is that the controls contained in the GroupBox cover up the GroupBox's text. What I'd like to do is determine if and when the text wraps. Specifically, I'd like to determine how much extra height the wrapped text takes up compared to a single line. That way, I can reposition the GroupBox's controls and adjust its height. Initially, I thought I'd do this by calling the GroupBox's CreateGraphics() method and using the Graphics to measure the string. Something like this:

        private void SetGroupBoxText(GroupBox grp, string text)
        {
            const int somePadding = 10;
            Graphics g = grp.CreateGraphics();
            SizeF textSize = g.MeasureString(text, grp.Font);
            if (textSize.Width > (grp.Width - somePadding))
            {
                // Adjust height, etc.
            }
        }

    The problem is that the size produced by g.MeasureString(text, grp.Font) doesn't seem to be accurate. I determined this by entering enough of a single repeated character to cause a wrap, then measuring the resulting string. For example, it took 86 pipes (|) until a wrap happened, yet when I measured that string, its width was ~253. And it took 16 capital W's to force a wrap - that string had a width of ~164. These were the two extremes I tested. My GroupBox's width was 189. (a's took 29 and had a width of ~180; O's took 22 and had a width of ~189.) Does anyone have any ideas? (Hacks, WinAPI calls, etc. are welcome solutions.)

    Read the article

  • How to get a timestamp with tick precision in .NET / C#?

    - by Hermann
    Up until now I used DateTime.Now for getting timestamps, but I noticed that if you print DateTime.Now in a loop, you will see that it increments in discrete jumps of approx. 15 ms. But for certain scenarios in my application I need to get the most accurate timestamp possible, preferably with tick (= 100 ns) precision. Any ideas?

    Update: Apparently, Stopwatch / QueryPerformanceCounter is the way to go, but it can only be used to measure time, so I was thinking about calling DateTime.Now when the application starts up, keeping a Stopwatch running, and then just adding the elapsed time from the Stopwatch to the initial value returned from DateTime.Now. At least that should give me accurate relative timestamps, right? What do you think about that (hack)?

    NOTE: Stopwatch.ElapsedTicks is different from Stopwatch.Elapsed.Ticks! I used the former assuming 1 tick = 100 ns, but there 1 tick = 1 / Stopwatch.Frequency. So to get ticks equivalent to DateTime, use Stopwatch.Elapsed.Ticks. I just learned this the hard way.

    NOTE 2: Using the Stopwatch approach, I noticed it gets out of sync with the real time. After about 10 hours, it was ahead by 5 seconds. So I guess one would have to resync it every X, where X could be 1 hour, 30 min, 15 min, etc. I am not sure what the optimal timespan for resyncing would be, since every resync will change the offset, which can be up to 20 ms.
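
    The same anchor-plus-monotonic-clock hybrid, sketched in Python (time.time() standing in for DateTime.Now and time.perf_counter() for the Stopwatch; the resync interval is the same trade-off NOTE 2 describes, since each resync can step the offset):

        import time

        class HybridClock:
            """Anchor wall-clock time once, then advance it with a monotonic counter."""

            def __init__(self):
                self.resync()

            def resync(self):
                self.anchor_wall = time.time()          # coarse but absolute
                self.anchor_mono = time.perf_counter()  # fine-grained but relative

            def now(self):
                # High-resolution timestamp; drifts from wall time until resync().
                return self.anchor_wall + (time.perf_counter() - self.anchor_mono)

        clk = HybridClock()
        print(clk.now())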

    Read the article

  • How do I fix the alpha value after calling GDI text functions?

    - by Daniel Stutzbach
    I have an application that uses the Aero glass effect, so each pixel has an alpha value in addition to red, green, and blue values. I have one custom-drawn control that has a solid white background (alpha = 255). I would like to draw solid text on the control using the GDI text functions. However, these functions set the alpha value to an arbitrary value, causing the text to translucently show whatever window is beneath my application's. After rendering the text, I would like to go through all of the pixels in the control and set their alpha values back to 255. What's the best way to do that? I haven't had any luck with the BitBlt, GetPixel, and SetPixel functions; they appear to be oblivious to the alpha value. Here are other solutions that I have considered and rejected:

    - Draw to a bitmap, then copy the bitmap to the device: with this approach, the text rendering does not make use of the characteristics of the monitor (e.g., ClearType).
    - Use GDI+ for text rendering: this application originally used GDI+ for text rendering (before I started working on Aero support). I switched to GDI because of difficulties I encountered trying to accurately measure strings with GDI+. I'd rather not switch back.
    - Set the Aero region to avoid the control in question: my application's window is actually a child window of a different application running in a different process. I don't have direct control over the Aero settings on the top-level window.

    The application is written in C# using Windows Forms, though I'm not above using interop to call Win32 API functions.

    Read the article

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications to sums. However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then do a *= against a log-based value. I have overloaded the *= operator for my log-based class: the right-hand-side value is first converted to a log-based value by running log() and then added to the left-hand-side value. Thus the operations actually performed are a floating-point division, a log(), and a floating-point addition. My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with a floating-point subtraction, yielding the following chain of operations: two log() calls, a floating-point subtraction, and a floating-point addition. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect a common answer would be that this is compiler- and architecture-dependent, so I'll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the relative speed of these two operations and/or an idea of how to measure the difference myself, as there might be more going on here, e.g., executing the constructors that do the type conversion, etc. Cheers!
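
    On measuring it yourself: the usual shape is a micro-benchmark that times a tight loop of each candidate operation and subtracts a baseline loop. The sketch below uses Python's timeit purely to show that harness shape (the absolute numbers say nothing about gcc 4.2 on darwin; a C++ version would additionally need to keep the optimizer from folding the loop away, e.g. by accumulating results into a volatile):

        import timeit

        setup = "import math; x = 0.123456789; d = 7.0"
        n = 10**6

        baseline = timeit.timeit("x", setup=setup, number=n)  # loop + load cost only
        division = timeit.timeit("x / d", setup=setup, number=n)
        log_call = timeit.timeit("math.log(x)", setup=setup, number=n)

        # Subtract the baseline so only the operation under test remains.
        print("division:", division - baseline)
        print("log():   ", log_call - baseline)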

    Read the article

  • How can you determine the file size in JavaScript?

    - by Daniel Lew
    I help moderate a forum online, and on this forum we restrict the size of signatures. At the moment we test this via a simple Greasemonkey script I wrote: we wrap all signatures in a <div>, the script looks for them and then measures each div's height and width. All the script does right now is make sure the signature fits within a particular height/width. I would like to start measuring the file size of the images inside a signature automatically, so that the script can flag users who are including huge images in their signature. However, I can't seem to find a way to measure the size of images loaded on the page. I've searched and found a property specific to IE (element.fileSize), but I obviously can't use that in my Greasemonkey script. Is there a way to find out the file size of an image in Firefox via JavaScript?

    Edit: People are misinterpreting the problem. The forums themselves do not host images; we host the BBCode that people enter as their signature. So, for example, people enter this:

        This is my signature, check out my [url=http://google.com]awesome website[/url]!
        This image is cool! [img]http://image.gif[/img]

    I want to be able to check on these images via Greasemonkey. I could write a batch script to scan all of these instead, but I'm just wondering if there's a way to augment my current script.
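
    One route (an assumption on my part, not from the post) is to skip the rendered image entirely and ask the server for each image's Content-Length with a HEAD request; in Greasemonkey, the cross-origin GM_xmlhttpRequest would play this role. A sketch of the idea in Python:

        import urllib.request

        def image_size_bytes(url):
            """HEAD request: read Content-Length without downloading the image.

            Returns None when the server does not report a length (e.g. chunked
            responses), so callers must treat the size as best-effort."""
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req) as resp:
                length = resp.headers.get("Content-Length")
                return int(length) if length is not None else None

        print(image_size_bytes("http://example.com/image.gif"))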

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me. Assume the following table structure (it should work in SQLite, PostgreSQL, and MySQL):

        CREATE TABLE a ( a_id INT PRIMARY KEY );
        INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

        CREATE TABLE b (
          b_id INT PRIMARY KEY,
          a_id INT,
          name VARCHAR(255) NOT NULL
        );
        INSERT INTO b (b_id, a_id, name) VALUES
          (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
          (4, 2, 'foo'), (5, 2, 'bar'), (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g., I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something like:

        a_id = 3 is missing name "bar"
        a_id = 4 is missing names "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

        SELECT a.a_id,
               CONCAT('|', GROUP_CONCAT(name ORDER BY name ASC SEPARATOR '|'), '|') AS names
        FROM a
        LEFT JOIN b USING (a_id)
        GROUP BY a.a_id
        HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it will be at all fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
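
    A sketch of one portable alternative (not from the post): an anti-join against the list of required names, so no GROUP_CONCAT or string matching is needed, and an index on b(a_id, name) can drive the NOT EXISTS probe. Demonstrated here with Python's sqlite3, though the query itself should also run on PostgreSQL and MySQL:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
          CREATE TABLE a (a_id INT PRIMARY KEY);
          INSERT INTO a VALUES (1), (2), (3), (4);
          CREATE TABLE b (b_id INT PRIMARY KEY, a_id INT, name TEXT NOT NULL);
          INSERT INTO b VALUES (1,1,'foo'), (2,1,'bar'), (3,1,'asdf'),
                               (4,2,'foo'), (5,2,'bar'), (6,3,'foo');
        """)
        # One row per (a_id, required name) pair that has no match in b.
        missing = conn.execute("""
          SELECT a.a_id, r.name
          FROM a CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS r
          WHERE NOT EXISTS (SELECT 1 FROM b
                            WHERE b.a_id = a.a_id AND b.name = r.name)
          ORDER BY a.a_id, r.name
        """).fetchall()
        print(missing)  # [(3, 'bar'), (4, 'bar'), (4, 'foo')]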

    Read the article

  • Is it okay to rely on JavaScript for menu layout?

    - by Ryan
    I have a website template where I do not know the number of menu items or the size of the menu items that will be required. The JavaScript below works exactly the way I want it to; however, this is the most JS I've ever written. Are there any disadvantages or potential problems with this method that I'm not aware of, given that I'm a JS beginner? I'm currently setting the padding manually for each site. Thank you!

        var width_of_text = 0;
        var number_of_li = 0;

        // measure the width of each <li> and add it to the total width; increment the li counter
        $('li').each(function() {
            width_of_text += $(this).width();
            number_of_li++;
        });

        // calculate the space between <li>'s so the spacing is equal
        var padding = Math.floor((900 - width_of_text) / (number_of_li - 1));

        // add the padding to all but the first <li>
        $('li').each(function(index) {
            if (index !== 0) {
                $(this).css("padding-left", padding);
            }
        });

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008, and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm of mine. The goal of the load test is to find out the resource utilization on the web, app, and DB tiers of my farm for a given load scenario. An example of a load scenario is:

        Usage profile: Average collaboration (as defined by SCCP)
        User load: 500 (using a step load pattern = a step of 50 every 2 minutes, with a warm-up time of 2 minutes for every step)
        Think time: 0
        Load duration: 8 hrs

    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources under the above load profile. I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So if I ran the load test for more than 8 hrs, % processor might go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify? How is it different from tests/sec? Another question: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time.

    Read the article

  • What's the best way to link formal specs to JIRA enhancement requests?

    - by Adam
    What's the best way to link formal specs to JIRA enhancement requests? I want to track changes to specifications using JIRA. Ideally, I'd like to refer to a functional ID reference in a JIRA ticket (e.g. MYAPPAPPROVAL LOGICMAIN SCREEN), so that program managers can retrospectively categorise defects. The reason for this is so that QA scripts and documentation tickets can be searched/categorised meaningfully in the tracking system. There seem to be a million possible ways to do this, e.g.:

    - should I write a custom component to select functional IDs from a tree?
    - should I write the specs in Confluence, or another CMS with a TrackBack facility?
    - should I include a link to the documentation URL?
    - should I use some other 3rd-party plugin application?
    - should I use some Atlassian application that I'm unaware of?
    - am I using the wrong tracking tool/process to measure spec growth?

    What's the best way, in your experience?

    Read the article

  • Windows Server 2003 Terminal Server does not give out all available licenses

    - by Erwin Blonk
    I installed the Terminal Server role on Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I rebooted the server, to no effect. Before this, there was another server (same Windows, except 32-bit) active as a licensing server. I removed the role from it first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, licenses and all, acts as if it only has the 2 standard RDP connections. The servers are all stand-alone; there is no Active Directory set up, and both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry whose exact phrasing I forget, but it was about temporary Windows 2003 device licenses. Later it added "Windows Server 2003 - TS Per Device CAL". The temporary pool held 2 licenses (standard RDP licenses, I think) and the other held 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours, or so it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone. Long story short, here is what it says now:

        Existing Windows 2000 Server, type: built-in [no licenses used; added for the sake of completeness]
        Windows Server 2003 - Terminal Server Per Device CAL Token, type: open [none of 10 used]
        Windows Server 2003 - TS Per Device CAL, type: open [3 of 10 used]

    As I tried to explain, this is the end result after a few changes, most of which I can't directly connect to any action on my part. Only going through the activation procedure by phone seemed to directly affect the TS, resulting in the above configuration. Still, it is impossible to connect with more than 3 people, which is 1 up from the 2 that could connect yesterday. TS does say 7 licenses are available, yet it won't give them out.

    Read the article

  • Windows Server 2003 Terminal Server does not give out all available licenses (solved)

    - by Erwin Blonk
    I installed the Terminal Server role on Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I rebooted the server, to no effect. Before this, there was another server (same Windows, except 32-bit) active as a licensing server. I removed the role from it first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, licenses and all, acts as if it only has the 2 standard RDP connections. The servers are all stand-alone; there is no Active Directory set up, and both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry whose exact phrasing I forget, but it was about temporary Windows 2003 device licenses. Later it added "Windows Server 2003 - TS Per Device CAL". The temporary pool held 2 licenses (standard RDP licenses, I think) and the other held 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours, or so it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone.

    Update 2 (4/12/10): The problem has been solved. It seems the activation over the web, while it claimed to have succeeded, did not work correctly; after activating by phone, it did work. What was different from the old setup, and what put me on the wrong foot from that moment on, is that I now need to create separate user accounts, because a session opened with one user account will be taken over when someone else logs in with that same account. On the previous server, it was possible to open several sessions with the same account. We now use Per Device licenses; I'm not sure what was used before. Thanks all for the replies.

    Read the article

  • How to calculate CPU % based on raw CPU ticks in SNMP

    - by bjeanes
    According to http://net-snmp.sourceforge.net/docs/mibs/ucdavis.html#scalar_notcurrent, ssCpuUser, ssCpuSystem, ssCpuIdle, etc. are deprecated in favor of the raw variants (ssCpuRawUser, etc.). The former values (which don't cover things like nice, wait, kernel, interrupt, etc.) returned a percentage:

        The percentage of CPU time spent processing user-level code, calculated over the last minute. This object has been deprecated in favour of 'ssCpuRawUser(50)', which can be used to calculate the same metric, but over any desired time period.

    The raw values return the "raw" number of ticks the CPU spent:

        The number of 'ticks' (typically 1/100s) spent processing user-level code. On a multi-processor system, the 'ssCpuRaw*' counters are cumulative over all CPUs, so their sum will typically be N*100 (for N processors).

    My question is: how do you turn the number of ticks into a percentage? That is, how do you know how many ticks there are per second ("typically" - which implies not always - 1/100s, which either means 1 tick every 100 seconds or that a tick represents 1/100th of a second)? I imagine you also need to know how many CPUs there are, or you need to fetch all the CPU values and add them together. I can't seem to find a MIB that gives an integer value for the number of CPUs, which makes the former route awkward. The latter route seems unreliable because some of the numbers overlap (sometimes). For example, ssCpuRawWait carries the following warning:

        This object will not be implemented on hosts where the underlying operating system does not measure this particular CPU metric. This time may also be included within the 'ssCpuRawSystem(52)' counter.

    Some help would be appreciated. Everywhere seems to just say that the percentage objects are deprecated because the metric can be derived, but I haven't found anywhere that shows the official, standard way to perform this derivation. The second issue is that these "ticks" are cumulative rather than measured over some time period. How do I sample values over some time period? The ultimate information I want is: % of user, system, idle, nice (and ideally steal, though there doesn't seem to be a standard MIB for it) "currently" (over the last 1-60 s would probably be sufficient, with a preference for smaller time spans).
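
    A sketch of the derivation the deprecation notes seem to assume (counter values below are hypothetical): poll the raw counters twice, difference them, and divide each delta by the sum of the deltas. Both the tick rate and the CPU count cancel out of the ratio, so neither needs to be known; the only care needed is to sum non-overlapping counters (e.g. wait time possibly folded into system, per the warning above).

        def cpu_percentages(sample1, sample2):
            """Turn two snapshots of ssCpuRaw* counters into percentages.

            Each sample maps counter name -> cumulative ticks (summed over all
            CPUs). Percentages are ratios of deltas, so ticks-per-second and
            the number of CPUs both cancel out."""
            deltas = {k: sample2[k] - sample1[k] for k in sample1}
            total = sum(deltas.values())
            return {k: 100.0 * d / total for k, d in deltas.items()}

        # Two SNMP polls taken some seconds apart (illustrative values):
        t0 = {'user': 120000, 'system': 40000, 'idle': 840000}
        t1 = {'user': 120600, 'system': 40200, 'idle': 841200}
        print(cpu_percentages(t0, t1))  # {'user': 30.0, 'system': 10.0, 'idle': 60.0}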

    Read the article

  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both in tags and title, and I'll add more links as soon as possible. For some years I've been trying to conceive a wireless environment that I'd set up anywhere and advise for everyone, from big enterprises to small home networks of one machine. I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering.

    I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical sense. In a way, encrypting is for wireless what walling (building walls) is for cables, and requiring pass-phrases is adding a door with a key. But cabling without encryption is also insecure: someone just needs to plug in and get your data! So while I can see the use of encrypting data, I don't think encryption is a security measure for wireless networks as such. As I said elsewhere, I believe we should encrypt only sensitive data, regardless of wires, and passwords should always be applied to the users, not to the wifi. For securing files, truly, the best solution is backup. Sure, all that doesn't happen that often, but I won't consider the many situations where people just don't care. I think there are enough situations where people do care about using passwords on their OS users, so let's go with that in mind. To break through the walls or the door, someone needs proper equipment, such as a hammer or a master key of some kind. The same is true for breaking the wireless walls in the analogy.

    But I'd say true data security lies elsewhere. I keep promoting the Fonera concept as an instance: it opens up a free wifi port, if you so choose, and anyone can connect to the internet through it without having any access to your LAN. It also uses QoS, which will never let your bandwidth drop because of that public usage. That's security, and it's open. And who doesn't want to be able to use the internet freely anywhere you can find wifi spots? I have 3G myself, but that's beside the point here. If I have wifi at home, I want to let people use it freely for internet access, so as not to be a hypocrite, and guests can easily access my files, with read-only access, so I don't need to keep setting up encryption and pass-phrases that are not wholly compatible.

    I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever, but I wanted to know from more experienced (super)users out there: what do you think? Is there really a need for encryption to have true wireless security?

    Read the article

  • Allow anonymous upload for Vsftpd?

    - by user15318
    I need a basic FTP server on Linux (CentOS 5.5) without any security measures, since the server and the clients are located on a test LAN that is not connected to the rest of the network, which itself uses non-routable IPs behind a NAT firewall with no incoming access to FTP. Some people recommend Vsftpd over PureFTPd or ProFTPd. No matter what I try, I can't get it to allow an anonymous user (i.e., logging in as "ftp" or "anonymous" and typing any string as the password) to upload a file:

        # yum install vsftpd
        # mkdir /var/ftp/pub/upload
        # cat vsftpd.conf
        listen=YES
        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        #directory was created by root, hence owned by root.root
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #chroot_local_user=NO
        #chroot_list_enable=YES
        #chroot_list_file=/etc/vsftpd.chroot_list
        chown_uploads=YES

    When I log on from a client, here's what I get:

        500 OOPS: cannot change directory:/var/ftp/pub/incoming

    I also tried "# chmod 777 /var/ftp/incoming/", but I get the same error. Does someone know how to configure Vsftpd with minimum security? Thank you.

    Edit: SELinux is disabled, and here are the file permissions:

        # cat /etc/sysconfig/selinux
        SELINUX=disabled
        SELINUXTYPE=targeted
        SETLOCALDEFS=0
        # sestatus
        SELinux status: disabled
        # getenforce
        Disabled
        # grep ftp /etc/passwd
        ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
        # ll /var/
        drwxr-xr-x 4 root root 4096 Mar 14 10:53 ftp
        # ll /var/ftp/
        drwxrwxrwx 2 ftp ftp 4096 Mar 14 10:53 incoming
        drwxr-xr-x 3 ftp ftp 4096 Mar 14 11:29 pub

    Edit: latest vsftpd.conf:

        listen=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        anonymous_enable=YES
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #500 OOPS: bad bool value in config file for: chown_uploads
        chown_uploads=YES
        chown_username=ftp

    Edit: with the trailing space removed from "chown_uploads", error 500 is solved, but anonymous still doesn't work:

        client> ./ftp server
        Connected to server.
        220 (vsFTPd 2.0.5)
        Name (server:root): ftp
        331 Please specify the password.
        Password:
        500 OOPS: cannot change directory:/var/ftp/pub/incoming
        Login failed.
        ftp> bye

    With user "ftp" listed in /etc/passwd with home directory set to "/var/ftp", access rights on /var/ftp set to "drwxr-xr-x", and on /var/ftp/incoming to "drwxrwxrwx"... could it be due to PAM, maybe? I can't find any FTP log file in /var/log to investigate.

    Edit: Here's a working configuration that lets ftp/anonymous connect and upload files to /var/ftp:

        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES

    Read the article

  • GRUB 2 freezing at OS selection screen, what could be the cause?

    - by Michael Kjörling
    Mains power is somewhat unreliable where I live, so every now and then the computer gets rebooted when the PSU can't maintain proper voltage during a brown-out or momentary black-out. A few times recently, when power was restored, the BIOS POST completed successfully, then GRUB started to load and froze. I've seen this at the "Welcome to GRUB!" message, but it seems to happen more often just past the switch to the graphical OS list. At this point, the computer will not respond to anything (arrow keys, control commands, Ctrl+Alt+Del, ...) - it simply sits there displaying this image, seemingly doing nothing more. At that point, turning the computer off using the power button and letting it sit for a while (cooling down?) has allowed it to boot successfully. Turning the computer off and immediately back on seems to give the same result (successful POST, then a freeze in GRUB). This behavior began recently, although it does not seem to be directly correlated with my hard disk woes (though it may be relevant that GRUB resides on that physical disk, I don't know). Once the computer has booted, it runs without a hitch. I know that a "proper" solution would be to invest in a UPS, but what might be causing behavior like this? I was thinking in terms of the CPU shutting down as a thermal control measure, but if that were the cause, wouldn't I see similar freezes during use (which I do not)? What else could cause freezes apparently closely, but not perfectly, related to the BIOS handover from POST to the OS bootloader? The BIOS is set to restore the previous power state after a power loss; since this PC is almost always turned on, that means restoring to full power. I do have a few expansion cards installed, but none that announce BIOS extensions through screen output during the boot process, at least, and I haven't made any changes in that regard in a long time. I haven't touched GRUB itself for a long time either, whether configuration or binaries, so I don't think that's the problem. Also, it doesn't really make sense that a bug in GRUB would manifest itself only once in a blue moon, yet significantly more often after a power failure.

    Read the article

  • SMPS stops when I plug in a SATA drive?

    - by claws
    Hello,

    Part 1: My first question is: are all the 4-wire power connectors (the ones intended for hard disks / DVD drives, not the motherboard) the same? I've been using them all interchangeably and had no problem for years. Yesterday I borrowed a SATA disk from my friend and connected it to my computer using a SATA power adaptor (4-wire), and when I switched on the computer, there were fumes coming out of the connector. I immediately turned it off (within a second). I tested the voltages on the 4-wire power connector of my SMPS: they were 5.3 V and 12.2 V. I couldn't measure the current. But my SMPS label reads:

        DC Output: 3.3V (25A)  +5V (32A)  -5V (0.3A)  +12V (17A)  -12V (0.8A)

    And the SATA hard disk label reads:

        Input: +5V (0.72A)  +12V (0.52A)

    I'm shocked! I never noticed this. Does the SATA power adaptor scale the current down to what is required? If it doesn't, I've been connecting things the same way for years and never had a problem; this is the first time I'm encountering it.

    Part 2: I wanted to return the drive to my friend. He has two hard disks, SATA and PATA; it's the SATA one that I borrowed. When he switches on, the CPU fan starts, then stops for a second, then starts again and continues working. That was the earlier situation; I don't know why it stops and starts. Now, when I connect this SATA disk and switch on the computer, the CPU fan starts (just for an instant, not even half a second) and stops. It doesn't start again - I mean, the power from the SMPS has stopped. But if I disconnect this SATA disk, it works fine. What seems to be the problem? I've no idea why there were fumes, why his SMPS starts and stops giving power, or what its relation to the SATA disk connection is.

    Read the article
