Search Results

Search found 395 results on 16 pages for 'peak reconstruction wavelength'.

Page 12/16 | < Previous Page | 8 9 10 11 12 13 14 15 16  | Next Page >

  • Road Warrior VPN Setup

    - by wobblycogs
    I apologise up front for the rather open-ended nature of this question, but I've got well out of my depth and could really do with some pointers. I need to set up a road warrior VPN solution which will allow our customers to securely access a number of services we provide for them. Customer machines will be running a variety of Windows versions from XP onwards with a variety of patch levels. Typically they will connect from the clients' main offices, but not always. It is safe to assume that all clients will be behind NATs, but we may occasionally see a connection that isn't NAT'ed. The typical connection situation is therefore: Customer Laptop -- Router (NAT) -- Internet -- VPN Server + Firewall -- Server (Win 2008 R2, non-routable IP). There will initially be a dozen or so people that could connect, but that will grow quickly to around 100. It's unlikely that we'll see that many concurrent connections though; I imagine our total VPN throughput would be <50Mbps peak. What are my options for setting this up? I've been trying to set up a system like this using a MikroTik router for a few days but have struggled to get it working correctly, particularly with NAT'ed clients. I've had a quick look at OpenVPN and liked what I saw, but I think it's unlikely our customers' IT departments would allow the client to be installed. Finally, I've looked at the Cisco ASA range, but I'm on a fairly tight budget so this is less preferable, although it looks like it would work pretty much out of the box. My fallback position is to connect the server directly and use the provided VPN + Firewall facilities, but that is far from ideal as the number of servers is likely to grow over time.

    Read the article

  • Windows 7 starts getting sluggish over a few days

    - by munrobasher
    The other developer and I are running Windows 7 Enterprise 64-bit with 8GB RAM on different Gigabyte motherboards with quad-core Intel CPUs. Most of the time, it runs like a dream. We use VMware Workstation a lot (hence the 8GB) and that works well. Except... now and then, after the PCs have been on for a few days, the whole system starts getting really sluggish doing certain tasks. The other developer's system is far worse than mine, with it taking up to a minute to launch IE. Today, mine has gone sluggish but nowhere near as bad. For example, normally when I click on a new tab in IE, it's instant. Today, there's an obvious delay. Right-clicking in this window to trigger iSpell is normally instant; right now it takes about five seconds. I've got Resource Monitor open on my second monitor, and when I did that right-click there was no obvious peak in CPU, disk or memory. A reboot does fix it, so it does sound like a resource issue, but I haven't a clue what might be to blame. The two computers have similarities (same spec) but also differences (like motherboard, RAM & CPU models). So I guess the question is: any pointers on diagnosing why a PC is sluggish? What could cause such a right-click slowdown in IE, for example? It sounds like such a simple operation. NOTE: whilst typing this message alone, it was fine performance-wise. I can click around the page no problem, but right-click is still noticeably slow. Will reboot over lunch... Cheers, Rob.
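    If it helps to rule resources in or out, a rough snapshot taken immediately before and after one of the slow operations can show whether anything moves at all. A minimal sketch in Python (assuming the third-party psutil package is installed on the box):

      # Rough resource snapshot around a slow operation; requires the psutil package.
      import psutil

      def snapshot(label):
          cpu = psutil.cpu_percent(interval=1)       # average CPU over one second
          mem = psutil.virtual_memory().percent      # physical memory in use
          io = psutil.disk_io_counters()             # cumulative disk counters
          print(f"{label}: cpu={cpu}% mem={mem}% "
                f"reads={io.read_count} writes={io.write_count}")

      snapshot("before")
      input("Do the slow right-click now, then press Enter...")
      snapshot("after")

    If the counters barely change while the operation still takes five seconds, that points away from raw CPU/disk/RAM pressure and towards something blocking (a wait rather than a load).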

    Read the article

  • EC2 configuration for medium load service on Django

    - by Luberg
    I have created a very basic Django application which puts an email into the database (a coming-soon page for a startup). I launched a t1.micro instance to try out what load it can handle. Nginx + FastCGI from Django + SQLite/Postgres - tried both. A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute): This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB of data in and out of your app. The average hit rate of 8.81/second translates to about 761,612 hits/day. You got bigger problems though: 87.28% of the users during this rush experienced timeouts or errors! I tried putting Varnish in front, disabled Debug mode in Django and started FastCGI in threaded mode - nothing helps. This is not gonna be a super high-load page - just a coming-soon page to save the emails of subscribers - but it should handle at least 500-1000 users at the same time at peak... I believe t1.micro is super small for that, but I have also tried a small instance - no better result. Please let me know whether I should use something different from Amazon EC2, or pick something bigger than t1.micro, or whether this is definitely a configuration issue?
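    For what it's worth, a burst like the blitz.io one can be reproduced from another machine with a small concurrent-requests sketch (Python with the requests package; the URL is a placeholder, not the real instance), which helps separate network limits from application limits:

      # Minimal concurrent load sketch; requires the requests package, URL is a placeholder.
      import time
      from concurrent.futures import ThreadPoolExecutor
      import requests

      URL = "http://your-instance.example.com/"
      N_REQUESTS = 500
      CONCURRENCY = 100

      def hit(_):
          start = time.time()
          try:
              return requests.get(URL, timeout=10).status_code, time.time() - start
          except requests.RequestException:
              return "error", time.time() - start

      with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
          results = list(pool.map(hit, range(N_REQUESTS)))

      failed = sum(1 for status, _ in results if status != 200)
      avg = sum(t for _, t in results) / len(results)
      print(f"{failed}/{N_REQUESTS} failed, average latency {avg:.2f}s")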

    Read the article

  • Windows/IIS Hosting :: How much is too much?

    - by bsisupport
    I have 4 Windows 2003 servers running IIS 6. These servers host a bunch of unique web sites (in that they are all different in build/architecture/etc). The code behind these sites ranges from straight HTML to classic ASP and 1.1/2.0/3.x flavors of .NET. Some (most) of the sites use a SQL backend, which is hosted on one or two different servers – not the IIS servers themselves. There is no virtualization on these servers and no load balancing for these particular sites. The problem I’m running into is coming up with some baseline metrics to determine, or basically come up with, a “baseline score” to know when a web server has reached its hosting limit. Today, some basic information about each server is used: how much bandwidth the server pumps out, hard drive space availability, and basic (very basic) RAM & CPU utilization (what it looks like at peak traffic times). I would be grateful if those of you that are 1000x smarter than I am could indulge me with your methods of managing IIS environments - whether performance monitoring specifics, “score” determination as I’m trying to do, or the obvious combination of both. Thanks in advance.
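    One hedged way to turn those per-server numbers into a single figure is a weighted "headroom" score; the Python sketch below only illustrates the idea, and the weights and example inputs are made up and would need calibrating against your own peak-traffic observations:

      # Illustrative capacity score from peak-time measurements; the weights are arbitrary.
      def headroom_score(bandwidth_used_mbps, bandwidth_cap_mbps,
                         cpu_peak_pct, ram_peak_pct, disk_free_pct):
          # Each term is the fraction of that resource still free, between 0 and 1.
          net = 1 - bandwidth_used_mbps / bandwidth_cap_mbps
          cpu = 1 - cpu_peak_pct / 100
          ram = 1 - ram_peak_pct / 100
          disk = disk_free_pct / 100
          # CPU and RAM weighted most heavily here; tune to whatever actually hurts first.
          return round(100 * (0.3 * cpu + 0.3 * ram + 0.25 * net + 0.15 * disk), 1)

      # Example: 40 of 100 Mbps used, 70% CPU and 65% RAM at peak, 30% disk free.
      print(headroom_score(40, 100, 70, 65, 30))   # higher = more room left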

    Read the article

  • Why do Windows/Microsoft Updates always take such a long time to detect available updates?

    - by RLH
    It's a common task for many of us who work in any form of IT position using Windows: eventually you have to install/re-install a version of Windows, and what follows is a very long OS updating process. For a long time I have accepted the fact that this is a slow process and that's all there is to it. There is a lot to download, and some updates require restarts followed by further updates... Ugh! This morning I had to go through the process of installing Windows XP with SP3. I'm installing the OS on a VM on an SSD and I've been working on this thing for over 6 hours. Although I think there are many ways to nit-pick this process for improvements, there is one step that is always particularly slow and I cannot figure out a good reason why. That step is the detection step on a manual update. Specifically, when you navigate to the Windows (or Microsoft) Updates page and then click the 'Custom' button to detect your updates, it appears that your PC just sits there for a painful amount of time. Check your Task Manager and it looks like your PC is, in fact, locked, because your CPU isn't cooking, but that's certainly not the case. Something is happening, but I have no clue what's going on. What is the updating software doing? If the registry were being searched, shouldn't my CPU usage peak? Does anybody know what's happening? I can loosely justify why some of the steps in the update process take so long. However, this one doesn't seem to have any reasoning.

    Read the article

  • Would an array of SSD drives be able to successfully substitute for the system memory?

    - by Florin Mircea
    I watched a few videos trying to answer this. This video (youtube.com/watch?v=eULFf6F5Ri8) shows a bunch of guys stacking 24 SSDs, reaching a peak of around 2GBps r/w. That's under the limit of the worst DDR3 in this list (memorybenchmark.net/write_ddr3_amd.html), which shows DDR3 memory performance varying from 2.78 to 6.55 Gb per second, but that video is over 3 years old. This video (youtube.com/watch?v=27GmBzQWwP0) shows a more optimistic situation, but for PCI-E SSD drives: 5 drives peaking at around 4Gb. And this other video shows that stacking up more than 3 SSDs doesn't realistically offer substantial added performance. This, and the fact that in all benchmarks the drives perform quite poorly when dealing with small files (5k file read/write averaging from 10MB to around 30-40MBps) as opposed to how native memory handles such files, seems to indicate a definite NO to this question. Also, the write life cycle is indeed limited and the drives might wear out quickly, as kindly pointed out by paddy. However, I wanted to get more opinions on this. Would it be possible to at least obtain current memory performance with SSDs in RAID 0? And if so, in what circumstances? I am assuming this configuration would be used with a Windows OS that has its memory pagefile resident on that stack of SSDs, thus making it very fast to work with.
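    Putting rough numbers on it (Python; the latency figures are order-of-magnitude assumptions rather than values from the videos, and the two bandwidth figures are treated the same way the question compares them):

      # Back-of-envelope comparison; latency values are assumed, not measured.
      ssd_stack_bandwidth = 2.0    # the 24-SSD stack's peak from the first video
      ddr3_bandwidth = 6.55        # top of the cited DDR3 list
      ssd_latency_us = 100.0       # typical SATA SSD random access, assumed
      dram_latency_us = 0.06       # roughly 60 ns DRAM access, assumed

      print(f"bandwidth gap: {ddr3_bandwidth / ssd_stack_bandwidth:.1f}x")
      print(f"latency gap:   {ssd_latency_us / dram_latency_us:,.0f}x")

    Even if RAID 0 closed the bandwidth gap, a pagefile on SSDs would still pay a latency penalty of three orders of magnitude or more on every access, which is consistent with the poor small-file numbers above.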

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on - should there be anything special? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. I think it is good to have a general answer on what direction to choose, but here are more details on my specific case: I am looking for a solution almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks with a change of content every day. We want the ability to make regular live broadcasts. I guess we will need to have several options of streaming quality (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate); if you could point out what parameters we should be aware of, it would be great. Uplink to the internet is to be determined. We plan to start with something and scale up on the way. If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think it could help people approaching this task.
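    As a rough sizing sketch (Python, using the bitrates from the question and an assumed 20% overhead for container and protocol framing), the uplink is usually the first number to pin down:

      # Rough uplink sizing; the overhead factor is an assumption.
      viewers = 1000
      bitrate_kbps = 273        # the "mid" quality stream mentioned above
      overhead = 1.2            # ~20% protocol/container overhead, assumed

      required_mbps = viewers * bitrate_kbps * overhead / 1000
      print(f"~{required_mbps:.0f} Mbit/s sustained uplink for {viewers} viewers")

    On those assumptions, 1000 viewers of the mid-quality stream need on the order of 330 Mbit/s of sustained outbound capacity before any hardware choice matters.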

    Read the article

  • Is it a good idea to switch to an SSD to use less battery?

    - by Walter Maier-Murdnelch
    I am thinking of buying an SSD for my laptop, mainly for the purpose of extended operating time when running on battery. At the moment I use a Hitachi HTS545032B9A300 (320GB) (Datasheet) as the main drive and a Seagate Momentus 5400.3 120GB as the secondary drive. I dual-boot Windows and Linux but I don't need the Windows partition any longer, so a 120GB SSD would be more than sufficient space-wise. Speed is not an issue for me: I make heavy use of tmpfs (ramdrive) within Linux, and transfers of bigger files go mainly through some network filesystem anyway, so a cheaper SSD should do. For the purpose of comparison I chose the OCZ Vertex Plus 120GB. Power consumption is always a big promotional thing the industry uses to make me want to buy their SSDs; a sheet on the OCZ page provides an astonishing comparison of desktop HDDs and SSDs. The numbers I got comparing my laptop HDD and their SSD were not really astonishing any longer.
    Hitachi 320GB HDD: Startup (W, peak, max.) 4.5; Seek (W, avg.) 1.7; Read/Write (W, avg.) 1.4; Performance idle (W, avg.) 1.3; Active idle (W, avg.) 0.8; Low power idle (W, avg.) 0.5; Standby (W, avg.) 0.2; Sleep 0.1
    OCZ 120GB SSD: 1.5W active; 0.3W standby
    I see that there are differences, but they don't actually seem as high as I thought they were. And compared to the power consumption of the rest of my system, I wonder if it makes a difference at all. Have I just taken the wrong look at the whole thing, or might I be better off buying another battery for my laptop?
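    A quick sanity check of what a storage-level saving is worth at the whole-laptop scale (Python; the battery capacity, average system draw and the size of the saving are all assumptions):

      # How much runtime a small storage power saving buys; all figures are assumptions.
      battery_wh = 57.0          # assumed 6-cell pack
      system_draw_w = 12.0       # assumed average draw, screen on, light load
      storage_saving_w = 0.5     # optimistic difference between the two drives

      before = battery_wh / system_draw_w
      after = battery_wh / (system_draw_w - storage_saving_w)
      print(f"{before:.1f} h -> {after:.1f} h (+{(after - before) * 60:.0f} min)")

    On those assumptions the gain is on the order of ten to fifteen minutes, which matches the suspicion that the headline desktop HDD-vs-SSD comparisons overstate the laptop case.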

    Read the article

  • What is the best hosting option for Flash web-widget?

    - by par
    Our Flash web widget has become highly popular. It is downloaded around 100,000 times per day. And that is the problem: our server bandwidth is too narrow to deliver the widget to clients quickly. The widget loads very slowly - probably 20 times slower than before (at peak times). I have probably not chosen the right host for my task of delivering a 1 MB Flash widget to 100,000 users per day. What is the best hosting solution in my case? I'm not good at server administration, so forgive me if I sound naive. The details are the following. Our hosting options: dedicated server (Ubuntu), 10 Mbit connection, monthly bandwidth limit of 2000 GB. Widget size is 1 MB. The widget consists of the main SWF and a number of loaded SWF and data files. This is a part of the Apache status report taken right now:
    Server uptime: 1 hour 2 minutes 38 seconds
    Total accesses: 74865 - Total Traffic: 5.8 GB
    CPU Usage: u28 s7.78 cu0 cs0 - .952% CPU load
    19.9 requests/sec - 1.6 MB/second - 81.1 kB/request
    200 requests currently being processed, 0 idle workers
    WWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWCWWWW WWWWWCWWWWWWWWWWWWWWCWWWWWWWWWWWWCWWWWWWWWCWWCWWWWWWWWWWWWWWWWWW WWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWWCWCWWWWWWWWWWWWWWWCW WWWWWWWW........................................................
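    Putting the figures from the question and the Apache status together (a rough Python calculation, taking the 10 Mbit uplink and the reported 1.6 MB/second at face value):

      # Rough bandwidth check from the figures in the question.
      downloads_per_day = 100_000
      widget_mb = 1.0
      uplink_mbit = 10

      daily_gb = downloads_per_day * widget_mb / 1024
      monthly_gb = daily_gb * 30
      current_mbit = 1.6 * 8        # Apache reports 1.6 MB/second right now

      print(f"~{daily_gb:.0f} GB/day, ~{monthly_gb:.0f} GB/month vs the 2000 GB cap")
      print(f"~{current_mbit:.0f} Mbit/s being served vs a {uplink_mbit} Mbit uplink")

    On those numbers the 10 Mbit pipe is already saturated and the monthly transfer would exceed the 2000 GB cap, so the bottleneck is the connection rather than the server itself.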

    Read the article

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting some good speeds, but in the last week the speed has dramatically decreased (from 15 Mbps+ to around 0.2 Mbps). This happens at all times of day, not just peak periods. Obviously I've done all I can to isolate problems at my end - only one PC is connected to the router (via ethernet cable), no other background programs are using the network, etc. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought it would also be good to get the opinion of superuser on possible causes or other troubleshooting I can do. Here are the juicy stats :) My router (Netgear DGN1000) reports: Connection Speed 17602 kbps downstream / 1062 kbps upstream; Line Attenuation 17.9 dB downstream / 8.6 dB upstream; Noise Margin 6.0 dB downstream / 6.1 dB upstream. I used RouterStats and it seems to show those figures staying fairly consistent all the time. I ran the BT speedtest and it reported: download speed of 164 kbps, out of a max achievable of 21000 kbps; upload speed of 859 kbps, out of 1048 kbps; DSL connection rate 17719 kbps down and 1048 kbps up; IP Profile of 15000 kbps. Is there any more troubleshooting I can do? Does this look like a problem with my equipment/wiring or with BT's line? Any advice would be great :)

    Read the article

  • Graphics artifacts/distortion with Win7 and nVidia

    - by Gepard
    The problem I encounter is rather hard to describe, so I provide a screenshot: http://dl.dropbox.com/u/1732760/video-distortion.png As you can see, there are some horizontal stripes in random colors. These stripes sometimes appear in all windowed apps, in games and on the desktop too. They tend to stay in place until I refresh the window (or force it to redraw by, for example, minimizing and maximizing again). They also tend to appear in the same place and shape multiple times; even if they disappear, it's very likely they will be in the same place again after a while. These artifacts do not blink or change if the computer is idling. If I don't touch anything and do not use the mouse, they will stay in place forever (unless some app redraws its window on its own). I first encountered this problem some weeks ago. Back then I thought it might be a cooling problem, so I took out the graphics card, removed dust from the radiator and fan and put it back into the PC. I also ran a stress test using FurMark (peak temperature was ~65C) to see if the problem becomes more intense when the card gets hotter, but surprisingly no artifacts whatsoever appear during the stress test. The graphics card is a Galaxy GeForce 7300 GT with DDR3 memory, never overclocked. Drivers are the latest, from the Nvidia site. The OS is Windows 7 64-bit, updated. AMD64 3000+, 2GB RAM. I'm running a dual monitor setup with two 19'' Samsung LCDs and the problem is on both, so I assume it's not a monitor or cable issue.

    Read the article

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing because it provides a true hardware failover solution. The DB would be stored in a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual processor boxes running these systems? If we build out the Oracle boxes with special hardware (hardware iSCSI cards, TOE NICs), will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

    Read the article

  • Wildcard subdomain setup... changing the host IP throws off client A records... what to do?

    - by Joe
    Here is the current setup (in a nutshell). The site is set up with a wildcard subdomain, so *.website.com is accessible. Clients can then map their own domains with an A record to the server IP address and it will translate to the appropriate *.website.com with redirections and environment variables in htaccess. Everything is working perfectly... but now comes the problem. The site has grown larger than a single DQC Xeon server can handle at peak times. Looking at cloud options seems tempting, but clients are pointing their domains to a single IP address with the A record (our server). Now, this was probably bad planning from the start, but the question is: if this was to be done today, how would we set it up so that clients use a CNAME, perhaps, to point their domains to our server rather than an A record? And, if that is not possible for the root domain, how can we then use multiple IP addresses on our side to handle the incoming HTTP requests? Complex enough? Hope I've explained it well!
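    For the transition itself, one practical first step is an inventory of which mapped client domains still resolve straight to the old address. A sketch using the third-party dnspython package (the IP and the domain list are placeholders):

      # List which client domains still point an A record at the old server IP.
      # Requires the dnspython package; the IP and domains below are placeholders.
      import dns.resolver

      OLD_IP = "203.0.113.10"
      client_domains = ["client-one.example", "client-two.example"]

      for domain in client_domains:
          try:
              answers = dns.resolver.resolve(domain, "A")
              ips = {rr.address for rr in answers}
              note = "still on old IP" if OLD_IP in ips else "already moved"
              print(f"{domain}: {sorted(ips)} ({note})")
          except Exception as exc:          # NXDOMAIN, timeout, etc.
              print(f"{domain}: lookup failed ({exc})")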

    Read the article

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O performance, and tons of unused disk space. I guess it's a universal issue since vendors offer the smallest drives at 300 GB capacity, but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that is 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time. The redo logs get their own 300 GB mirror, but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so the sysadmin team would be reminded about the risk of hindering the performance of the main db/app? Or, even better, be protected from that risk? The typical situation that comes to my mind is: "oh, I have this very large zip file, where do I uncompress it? Umm, let's see the df -h and we'll figure something out in no time..." I don't put emphasis on strictness of the security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?
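    On the Linux side, the closest thing to "a filesystem capped to a very low I/O rate" is usually a block-I/O throttle on whichever control group the ad-hoc work runs in. The sketch below is assumption-heavy: it presumes a cgroup-v2 system with the io controller enabled for the parent group, root privileges, and a placeholder major:minor device number for the array:

      # Throttle ad-hoc work (e.g. unzipping into the spare space) to ~10 MB/s.
      # Assumes cgroup v2 at /sys/fs/cgroup with the io controller enabled and root access;
      # 8:16 is a placeholder for the real array's major:minor numbers.
      import os

      CGROUP = "/sys/fs/cgroup/spare-space-jobs"
      DEVICE = "8:16"
      LIMIT = 10 * 1024 * 1024      # 10 MB/s for both reads and writes

      os.makedirs(CGROUP, exist_ok=True)
      with open(os.path.join(CGROUP, "io.max"), "w") as f:
          f.write(f"{DEVICE} rbps={LIMIT} wbps={LIMIT}\n")
      # Move the current shell (and so any unzip it launches) into the throttled group.
      with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
          f.write(str(os.getpid()))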

    Read the article

  • Constant crashes in Windows 7 64-bit when playing games

    - by yx.
    I've tried everything I can possibly think of in trying to fix this problem and I'm totally out of ideas, so any help would be appreciated. The problem: whenever I fire up a game, it works for a short while with no problems and then it crashes. Either it's a hard crash, forcing me to reboot, or Windows reports that the display driver has stopped working and recovered. Here is a list of things I've already tried:
    Drivers - tried the latest drivers (Catalyst 9.12) as well as the stock drivers that came with the video card. Also have the latest BIOS/chipset.
    Memtest - ran Memtest86+ overnight with no problems; the Windows diagnostic tool also does not find any problems.
    Overheating - video card/CPU temperatures are well below peak (42 and 31 Celsius respectively).
    PSU voltage - CPUID shows that the voltage levels are all above what they should be. The PSU itself is only roughly 16 months old and is a good model.
    HDD - no errors when checked.
    GPU - brand new (replaced the previous card since I thought it was the problem; apparently not).
    Overclocking - everything is at stock levels, memory voltage is set to the manufacturer's standard.
    Specs: Motherboard: ASUS P5Q Pro; CPU: Core 2 Duo E8400 3.0 GHz; OS: Windows 7 Home Premium 64-bit; Memory: Mushkin Enhanced 4GB DDR2; GPU: Sapphire HD 5850 1GB; PSU: SeaSonic M12 600W ATX12V; DirectX: DX11.
    Event Viewer after a crash always has these logged: "A fatal hardware error has occurred. Reported by component: Processor Core. Error Source: Machine Check Exception. Error Type: Bus/Interconnect Error. Processor ID: 1. The details view of this entry contains further information." and an identical entry for Processor ID: 0. A previous card that I had (4850x2) also had these errors, so I changed video cards, but the same thing is happening.

    Read the article

  • What is the typical maximum number of database connections for Oracle running on a Windows server?

    - by Sake
    We are maintaining a database server that serves a large number of clients, each typically running several client applications. The total number of connections to the database server (Oracle 9i) reaches 800 connections at peak load, and the Windows 2003 server is starting to run out of memory. We are now planning to move to 64-bit Windows in order to gain higher memory capacity. As a developer I suggest moving to a multi-tier architecture with connection pooling, which I believe is a natural solution to this problem. However, in order to support my idea, I want information on: what exactly is the typical number of connections allowed for an Oracle database? What is the problem when the number of connections is too high? Too much memory consumption? Too many sockets opened? Or too much context switching between threads? To be a little bit specific, how could an Oracle Forms application scale to thousands of users without facing this problem? Should Oracle RAC be applied in this case? I'm sure the answer to this question depends on quite a number of factors, like the exact spec of the hardware being used. I'm expecting a rough estimation or some experience from the real world.
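    To illustrate the pooling idea (a hedged sketch with the cx_Oracle driver for Python; credentials, DSN and pool sizes are placeholders), the point is that many client threads share a few dozen real sessions instead of each holding its own:

      # Connection pooling sketch with the cx_Oracle driver; all values are placeholders.
      import cx_Oracle

      pool = cx_Oracle.SessionPool(user="app", password="secret", dsn="dbhost/ORCL",
                                   min=5, max=40, increment=5)

      def run_query(sql):
          conn = pool.acquire()          # borrow one of the pooled sessions
          try:
              cur = conn.cursor()
              cur.execute(sql)
              return cur.fetchall()
          finally:
              pool.release(conn)         # return it instead of closing it

    With a middle tier in front, hundreds of client applications can call into the pool while the database itself never sees more than the pool's maximum number of sessions.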

    Read the article

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions: # Look for and purge old sessions every 30 minutes 09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \ && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \ -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \ fuser -s {} 2> /dev/null \; -delete My problem is that this process is taking a very long time to run, with lots of disk IO. Here's my CPU usage graph: the cleanup running is represented by the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute marks. At 15:00 I removed the 39 minute entry from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). I also captured the corresponding graphs for IO time and disk operations. At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
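    For comparison, a minimal sweep that skips the per-file fuser check might look like the Python sketch below (it assumes the same /var/lib/php5 layout and the maxlifetime helper returning minutes, and it deliberately accepts the small race that fuser was guarding against):

      # Minimal stale-session sweep without the per-file fuser check; paths are assumed.
      import os
      import subprocess
      import time

      SESSION_DIR = "/var/lib/php5"
      max_minutes = int(subprocess.check_output(["/usr/lib/php5/maxlifetime"]).decode().strip())
      cutoff = time.time() - max_minutes * 60

      removed = 0
      for entry in os.scandir(SESSION_DIR):
          # st_ctime mirrors find's -cmin test in the stock cron job
          if entry.is_file(follow_symlinks=False) and entry.stat().st_ctime < cutoff:
              try:
                  os.unlink(entry.path)
                  removed += 1
              except OSError:
                  pass                   # vanished or permission problem; skip it
      print(f"removed {removed} stale session files")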

    Read the article

  • Should I completely turn off swap for a Linux webserver?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux webservers with enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually result in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable condition due to huge lags, and I will probably be unable to log in and kill the webserver process or deny all incoming traffic (all but ssh). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?

    Read the article

  • Computers on network crashing

    - by Phil Cross
    We have recently upgraded our network to Windows 7 clients with Windows Server 2008 servers. The upgrade was completed by the end of September and until now has been fine (apart from minor bugs). Recently (within the last 2 weeks) we've noticed all computers on the network (around 1000) start to slow down to the point they're unusable. It starts at about 08:45 and finishes at 09:15. Because of this, we think something may be broadcasting across the network. This happens every day, between these times. I can't use my computer at all at the slowdown peak, and looking at Task Manager's performance graphs, physical memory is hovering around 35% and CPU usage is at 0-10% (idle), yet the machine is still unusable. I've looked at the DHCP server log and can't see anything which stands out. The only change we made prior to the slowdown was installing Adobe CS6 on some computers; however, the slowdown affects computers without CS6. We have 2 physical machines, each with around 5-7 virtual machines running on them with ample memory. Does anyone have any suggestions as to what we can do to narrow down what's causing this? Any help, suggestions or advice would be appreciated.

    Read the article

  • System requirements for running Windows 8 (basic office use) in VirtualBox (Ubuntu as host OS)

    - by Tor Thommesen
    I want to run Windows 8 as a guest OS with VirtualBox on some ThinkPad (haven't bought one yet) running Ubuntu 12.04. Apart from virtualizing Windows 8 (mostly just for use with the Office suite), my needs are very modest; I don't need much more than emacs and a browser. What I'd like to know is what kind of specs will be necessary to run Windows 8 well as a VM, using the Office apps. It would be a shame to waste money on overpowered hardware. Are there any official guidelines from Oracle or Microsoft on this? Would this Lenovo X220, for example, be sufficiently powerful? The specs below were taken from this review:
    Intel Core i5-2520M dual-core processor (2.5GHz, 3MB cache, 3.2GHz Turbo frequency); Windows 7 Professional (64-bit); 12.5-inch Premium HD (1366 x 768) LED Backlit Display (IPS); Intel Integrated HD Graphics; 4GB DDR3 (1333MHz); 320GB Hitachi Travelstar hard drive (Z7K320); Intel Centrino Advanced-N 6205 (Taylor Peak) 2x2 AGN wireless card; Intel 82579LM Gigabit Ethernet; 720p High Definition webcam; Fingerprint reader; 6-cell battery (63Wh) and optional slice battery (65Wh); Dimensions: 12 (L) x 8.2 (W) x 0.5-1.5 (H) inches with 6-cell battery; Weight: 3.5 pounds with 6-cell battery, 4.875 pounds with 6-cell battery and optional external battery slice; Price as configured: $1,299.00 (starting at $979.00).

    Read the article

  • JavaOne Latin America 2012 is a wrap!

    - by arungupta
    Third JavaOne in Latin America (2010, 2011) is now a wrap! Like last year, the event started with a Geek Bike Ride. I could not attend the bike ride because of pre-planned activities but heard lots of good comments about it afterwards. This is a great way to engage with JavaOne attendees in an informal setting. I highly recommend you join next time! The JavaOne Blog provides great coverage of the opening keynotes. I talked about all the great functionality that is coming in the Java EE 7 Platform. I also shared details on how the Java EE 7 JSRs are willing to take help from the Adopt-a-JSR program. glassfish.org/adoptajsr bridges the gap for JUGs willing to participate and looking for areas where they can help. The different specification leads have identified areas where they are looking for feedback. So if your JUG is interested in picking a JSR, I recommend taking a look at glassfish.org/adoptajsr and jumping on the bandwagon.

    The main attraction for the Tuesday evening was the GlassFish Party. The party was packed with Latin American JUG leaders, execs from Oracle, and local community members. Free-flowing food and beer/caipirinhas acted as great lubricant for great conversations. Some of them were considering the migration from Spring -> Java EE 6 and replacing their primary app server with GlassFish. Locaweb, a local hosting provider, sponsored a round of beer at the party as well. They are planning to come out with Java EE hosting next year and GlassFish would be a logical choice for them ;) I heard lots of positive feedback about the party afterwards. Many thanks to Bruno Borges for organizing a great party! Check out some more fun pictures of the party!

    The next day, I gave a presentation on "The Java EE 7 Platform: Productivity and HTML 5" and the slides are now available. With so much new content coming in the platform - Java Caching API (JSR 107), Concurrency Utilities for Java EE (JSR 236), Batch Applications for the Java Platform (JSR 352), Java API for JSON (JSR 353), Java API for WebSocket (JSR 356) - and with JAX-RS 2.0 (JSR 339) and JMS 2.0 (JSR 343) getting major updates, there was definitely a lot of excitement evident amongst the attendees. The talk was delivered in the biggest hall and had about 200 attendees. I also spent a lot of time talking to folks at the OTN Lounge. The JUG leaders appreciation dinner in the evening had its usual share of fun.

    Day 3 started with a session on "Building HTML5 WebSocket Apps in Java"; the slides are now available. The room was packed with about 150 attendees and there was good interaction in the room as well. A collaborative whiteboard built using WebSocket was very well received. The following tweets made it more worthwhile: "A WebSocket speek, by @ArunGupta, was worth every hour lost in transit. #JavaOneBrasil2012, #JavaOneBr" and "@arungupta awesome presentation about WebSockets :)" The session was immediately followed by the hands-on lab "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". The lab covers JAX-RS 2.0, Jersey-specific features such as Server-Sent Events, and a WebSocket endpoint using JSR 356. The complete self-paced lab guide can be downloaded from here. The lab was planned for 2 hours, but several folks finished the entire exercise in about 75 minutes. The wonderfully written lab material and the added incentive of a Java EE 6 Pocket Guide did the trick ;-) I also spoke at "The Java Community Process: How You Can Make a Positive Difference". It was really great to see several JUG leaders talking about the Adopt-a-JSR program and other activities that attendees can do to participate in the JCP. I shared details about Adopt a Java EE 7 JSR as well. The community keynote in the evening looked like fun, but I had to leave partway through to get through the peak Sao Paulo traffic :) Enjoy the complete set of pictures in the album:

    Read the article

  • Cocos2d-xna memory management for WP8

    - by Arkiliknam
    I recently upgraded to VS2012 and tried my in-development game out on the new WP8 emulators, but was dismayed to find that the emulator now crashes and throws an out-of-memory exception during my sprite loading procedure (funnily, it still works in the WP7 emulators and on my WP7 device). Regardless of whether the problem is the emulator or not, I want to get a clear understanding of how I should be managing memory in the game. My game consists of a character who has 4 or more different animations. Each animation consists of 4 to 7 frames. On top of that, the character has up to 8 stackable visualization modifications (e.g. eye type, nose type, hair type, clothes type). Before the memory issue, I preloaded all textures for each animation frame and customization and created animate actions out of them. The game then plays animations using the customizations applied to the current character. I re-examined this implementation when I received the out-of-memory exceptions and have started playing with RenderTexture instead, so rather than preloading all possible textures, it only loads the textures needed for the character and renders them onto a single texture, from which the animation is built. This means the animations use 1/8th of the sprites they did before. I thought this would solve my issue, but it hasn't. Here's a snippet of my code:
    var characterTexture = CCRenderTexture.Create((int)width, (int)height);
    characterTexture.BeginWithClear(0, 0, 0, 0);
    // stamp a body onto my texture
    var bodySprite = MethodToCreateSpecificSprite();
    bodySprite.Position = centerPoint;
    bodySprite.Visit();
    bodySprite.Cleanup();
    bodySprite = null;
    // stamp eyes, nose, mouth, clothes, etc...
    characterTexture.End();
    As you can see, I'm calling Cleanup and setting the sprite to null in the hope of releasing the memory, though I don't believe this is the right way, nor does it seem to work... I also tried using SharedTextureCache to load textures before stamping my texture out, and then clearing the SharedTextureCache with: CCTextureCache.SharedTextureCache.RemoveAllTextures(); But this didn't have an effect either. Any tips on what I'm not doing? I used VS to do a memory profile of the emulation causing the crash. Both the WP7.1 and WP8 emulators peak at about 150MB of usage; WP8 crashes and throws an out-of-memory exception. Each customisation/frame is 15KB at the most. Let's say there are 8 layers of customisation = 120KB, but I render them onto one texture, which I would assume is only 15KB again. Each animation is 8 frames at the most. That's 15KB for 1 texture, or 960KB for 8 textures of customisation. There are 4 animation sets. That's 60KB for 4 sets of 1 texture, or 3.75MB for 4 sets of 8 textures of customisation. So even if it's storing every layer, it's 3.75MB... nowhere near the 150MB breaking point my profiler seems to suggest :( WP 7.1 Memory Profile (max 150MB) WP8 Memory Profile (max 150MB and crashes)

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes, something really odd happens when using our computers that makes no sense at all…such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today’s SuperUser post has the answer to a puzzled reader’s dilemma. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

    The Question: SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up: I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600×10800.jpg) The image is 21,600*10,800 pixels in size. When I right click and select “Copy Image” in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1. Why would a simple image freeze Joban’s computer up after copying it to the clipboard?

    The Answer: SuperUser contributor Mokubai has the answer for us: “Copy Image” is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression. Hence the compressed file is only 6 MB. The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, then those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out. In short: Do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare. Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: It starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees. Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge). Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users?
Check out the full discussion thread here.

    Read the article

  • WebCenter Innovation Award Winners

    - by Michael Snow
    Of course, here on our WebCenter blog we'd like to highlight and brag about our great WebCenter winners. The 2012 WebCenter Innovation Award Winners:

    University of Louisville. Location: Louisville, KY, USA. Industry: Higher Education. Fusion Middleware Products: WebCenter Portal, WebCenter Content, JDeveloper, WebLogic, Oracle BI, Oracle IdM. The University of Louisville is a state-supported research university, and its winning project is a Statewide Informatics Network to improve public health: the university has implemented WebCenter as part of the LOUI (Louisville Informatics Institute) Initiative, which will improve public healthcare and lower cost through the use of novel technology and next-generation analytics, decision support and innovative outcomes-based payment systems.

    News Limited. Country/Region: Australia. Industry: News/Media. FMW Products: WebCenter Sites. A single platform running websites for 50% of Australia's newspapers: News Corp is running half of Australia's newspaper websites on this shared platform powered by Oracle WebCenter Sites, has overtaken its nearest competitors and is now leading in terms of monthly page impressions. At peak they have over 250 editors on the system publishing in real time. Sites include www.newsspace.com.au, www.news.com.au, www.theaustralian.com.au and many others.

    Life Technologies Corp. Country/Region: Carlsbad, CA, USA. Industry: Life Sciences. FMW Products: WebCenter Portal, SOA Suite. Life Technologies Corp. is a global biotechnology tools company dedicated to improving the human condition with innovative life science products. They were awarded an innovation award for their solution utilizing WebCenter Portal for remotely monitoring and repairing biotech instruments. They deployed WebCenter as a portal that accesses Life Technologies' cloud-based service monitoring system, where all customer-deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud-based monitoring services directly to the customer and to Life Technologies field engineers, and it provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs.

    China Mobile Jiangsu. China Mobile Jiangsu is one of the biggest subsidiaries of China Mobile, with over 25,000 employees and 40 million mobile subscribers. Country/Region: Jiangsu, China. Industry: Telecommunications. FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM. They were awarded an Innovation Award for their new employee platform powered by WebCenter Portal, designed to serve their 25,000+ employees and help them drive collaboration and productivity. The JSMCC (China Mobile Jiangsu) Employee Enterprise Portal and Collaboration Platform is one of China Mobile's most important IT innovation projects. The new platform is designed to serve JSMCC's 25,000+ employees and to help them improve working efficiency, changing their traditional working mode to social ways and encouraging employees in business collaboration and innovation. The solution is built on top of the Oracle WebCenter Portal Framework and WebCenter Spaces while also leveraging WebLogic Server, UCM, OID, OAM, SES, IRM and Oracle Database 11g. By providing rich collaboration services, knowledge management services, sensitive document protection services, unified user identity management services, unified information search services and personalized information integration capabilities, the platform has greatly improved the working efficiency of JSMCC employees. Main functionality: information portal, office automation integration, personal space, group space, team collaboration with Web 2.0 services, unified search engine for multiple data sources, document management and protection, and SSO for multiple platforms.

    LADWP - Los Angeles Department of Water and Power. The Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California, and also bills most of these customers for sanitation services provided by another city department. Country/Region: US - Los Angeles, CA. Industry: Public Utility. FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM. The new infrastructure consists of: Oracle WebCenter Portal including a mobile portal; Oracle WebCenter Content for content management and digital asset management (DAM); Oracle OAM (IDM, OVD, OAM) integrated with AD for enterprise identity management; Oracle Siebel for CRM; Oracle DB; and Oracle SOA Suite for integration of various subsystems and back-end systems. The new portal's features include: a complete graphical redesign based on best practices in UI design for high usability; customer self-service implemented through MyAccount (bill pay, payment history, bill history, usage analysis, service request management); financial assistance programs (CRM, WebCenter); customer rebate programs (CRM, WebCenter); turn on/off/transfer of services (commercial and residential); outage reporting; eNotification (SMS, email); multilingual support (English and Spanish) using WebCenter multi-language support; Section 508 (ADA) compliance; search using WebCenter SES (Secured Enterprise Search); distributed authorship in WebCenter Content; and mobile access (any mobile browser).

    Read the article

  • Do MORE with WebCenter - Webcast Overview & TIES Tour

    - by Michael Snow
    Today's post is from Michelle Huff, Senior Director, Product Management, Oracle WebCenter.

    In case you missed it, I presented on a webcast yesterday focused on how you can "Do More with Oracle WebCenter - Expand Beyond Content Management." As you may remember, we rebranded Oracle's Enterprise Content Management (ECM) Suite, which some people knew by the wonderfully techie three-letter acronyms -- UCM, URM & IPM -- to Oracle WebCenter Content last year. Since it's a unified ECM platform, I've seen many customers over the years continue to expand the number of content-centric solutions and application integrations powered by WebCenter throughout their organizations. But did you know WebCenter also provides portal, collaboration and web experience management capabilities as well? This enables you to leverage your existing investment in the WebCenter platform, as well as the information you're managing, to create engaging sites, collaborative spaces, or self-service portals and composite applications. In the webcast I walked through six different ways that you can do more with WebCenter: a collaborative content contribution and sharing environment; sharing content across intranets and extranets; combining content in composite applications; creating targeted online experiences; managing interactive social experiences; and optimizing multi-channel customer experiences.

    Joining me on the call was Greg Utecht with TIES. TIES is a joint powers cooperative owned by 46 Minnesota school districts; it represents 514 schools and provides software applications, hardware and software, internet service and professional development designed by educators for education. I was having a lot of fun over the past few days talking with Greg about the TIES implementation and future plans with WebCenter. He joined me on the call for a little Q&A to explain how he's using WebCenter today for their iContent implementation for document management, records management and archiving, and he also covered how they have expanded their implementation to create a collaborative space called their HRPay System with WebCenter to facilitate collaboration and to better engage their users within the school districts. During our conversation a few questions came from the audience about their implementation. They were curious to see how the system looked - so let's take a peek. The first screenshot shows the screen that a human resources or payroll worker in one of our member districts would see upon logging in, based on their credentials and role in their district. The next shows the result of clicking on the SUBSCRIBE link on the main page; it allows the user to subscribe to parts of the portal, which will e-mail him/her when those are updated in any way. The next two show the screens that a human resources or payroll worker would see upon clicking on the Resources link and on the Finance Advisory link, with its discussion threads and document sharing areas. The following screenshots show the screen that appears when the forum topic on the preceding screen is clicked, the screen portlet up close with shared documents, and the screen that appears when a shared document is clicked on. Note that there is also a download button and an update button, meaning people can work on these collaboratively. If you missed the webcast, check it out! You can watch the replay OnDemand HERE.
    If you attended the webcast, thanks for joining - I hope you learned a little from the session. I learned that kids are getting digital report cards today! Wow, have times changed with technology. Uh oh, is this when I start saying "You know, back in my days…?"

    Read the article
