Search Results

Search found 298 results on 12 pages for 'billion'.

Page 6/12

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago. Explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than a 2-second mobile app startup. An app with a poor mobile experience results in people not buying stuff, going to a competitor, and not liking your company. Battery life. A bad mobile app is worse than no app at all because it turns people away from the brand, etc. Apps didn't exist 10 years ago; 72 billion dollars a year in 2013, 151 billion in 2017.
    Testing performance. Mobile is different from a regular app. Need to fix issues before customers discover them. ARO is a free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different -- the radio resource control (RRC) state machine. Radio resource control -- the radio goes from idle to continuous reception -- drains battery, sends data, packets coming through; after the packets come through the radio is still on, which is tail time; after 10 seconds of no data the radio goes off. For example, YouTube: 10 to 15 seconds after every connection can be a huge drain on battery; app traffic triggers the RRC state. Goal: balance fast network connectivity against battery usage. ARO is free and open source, can test any platform, and has won awards.
    How do I test my app? pcap or tcpdump network capture. Native collector: Android and iOS. For Android a rooted device is needed. Test the app on the phone, background data, idle for ads and analytics. Graded against 25 best practices. See all the processes, all network traffic mapped to processes, stats about the trace; can look just at your app, exclude Facebook, etc. Many tests conducted, e.g., file download, HTML (wrapped applications, e.g., Cordova).
    Best practices. Make stuff smaller: GZIP, smaller files, download faster; best for files larger than 800 bytes; minification -- remove tabs and comments -- the browser doesn't need them, just give the processor what it needs, separate the wheat from the chaff. Images -- make images smaller; a 1024x1024 image for a checkmark -- squish it, make it 33% smaller; ARO records the screen, and it probably could be 9 times smaller. Download less stuff: 17% of HTTP content on mobile is duplicate data because of caching; reloading from cache is 75% to 99% faster than downloading again, 75% possible savings, which means the app will start up faster because it uses the cache -- everyone wants the app starting up in 2 seconds. Make fewer HTTP requests: inline and combine CSS and JS when possible to reduce the number of requests; sprite images that are used often. Fewer connections: faster and uses less battery. For example, download an image every 60 seconds, download an ad every 60 seconds, send analytics every 60 seconds -- instead of that, use a transaction manager, download everything at once, and reduce the amount of time connected to the network by 40%. Also, 80% of applications do NOT close connections when they are finished; e.g., download a picture, 10 seconds later the radio turns off; if you do not explicitly close, eventually the server closes -- 38% more tail time, and 40% less energy if you close the connection right away; background data traffic is 27% of data and 55% of network time, and this kills the battery. Look at redirection: it adds 200 to 600 ms on each connection; use a waterfall diagram of all the requests -- e.g., xyz.com redirects to www.xyz.com redirects to xyz.mobi to www.xyz.com; waterfall visualization of packets; minimize redirects, but some redirects are fine.
    HTML best practices. Order matters, and so does hidden code: JS downloading blocks rendering, so always do CSS before JS, or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser downloads them anyway, which adds latency to the application. Some apps turn on GPS for no reason. Tell the network when you are done, but maybe some other app is using the radio at the same time. It's all about knowing best practices: everyone wins with ARO (carriers, e.g., AT&T, developers, customers). Faster apps, better battery usage, better network traffic, better app reviews, happier customers. The MBTA app was referenced as an example. ARO is free, open source, and can test all platforms.
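    To make the GZIP point concrete, here is a minimal sketch (plain Python, not part of ARO; the payload is made up) showing how much a typical repetitive API response shrinks, and the talk's ~800-byte rule of thumb for when compression is worth the overhead:

    ```python
    import gzip, json

    # A made-up, repetitive JSON payload; HTML, CSS and JS compress similarly well.
    payload = json.dumps([{"id": i, "name": "product", "price": 9.99} for i in range(200)]).encode()

    compressed = gzip.compress(payload)
    print(f"{len(payload)} bytes -> {len(compressed)} bytes "
          f"({len(compressed) / len(payload):.0%} of original)")

    # Rule of thumb from the talk: only bother above ~800 bytes, since gzip adds
    # its own header and dictionary overhead that can outweigh savings on tiny responses.
    if len(payload) > 800:
        print("worth compressing")
    ```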

    Read the article

  • Optimal size for Database partitions

    - by Adrian Mouat
    Hi all, I am creating a very simple, very large Postgresql database. The database will have around 10 billion rows, which means I am looking at partitioning it into several tables. However, I can't find any information on how many partitions we should break it into. I don't know what type of queries to expect as of yet, so it won't be possible to come up with a perfect partitioning scheme, but are there any rules of thumb for partition size? Cheers, Adrian.

    Read the article

  • Flex: Using NumericStepper as an itemEditor in a DataGrid

    - by AlexH
    I am trying to make one field in a datagrid editable with a numeric stepper. My current attempts look like they are working, but the dataProvider is not actually being changed. Based on what I have read in a billion different places, the syntax should be <mx:DataGridColumn dataField="a" itemRenderer="mx.controls.NumericStepper" rendererIsEditor="true" editorDataField="value" /> I have tried several variations on this theme, and nothing seems to work. What am I missing?

    Read the article

  • Integrate gmail in asp.net web application

    - by Zerotoinfinite
    Hi, I am using ASP.NET 3.5 with C#. I want to integrate Gmail into my site, just like any widget, so that people can log in there and access their Gmail account. Actually, Gmail is blocked in my organisation and I want to access it via a proxy that I can integrate into my website. Thanks a billion.

    Read the article

  • jQuery check on wordchange rather than "change" trigger

    - by benhowdle89
    I have an input that the user types a search parameter into. At the moment I have it on keyup to do a POST AJAX request to a PHP script that returns search results. However, it's firing off 50 billion (not literally) POST requests in about 10 seconds (as the user types), which slows the whole experience down. Can I use jQuery to detect a "wordup" rather than a "keyup" by detecting the use of the space bar? Does this make sense?

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:
    | OS             | Windows 2008 64-bit           |
    | MySQL version  | 5.5.24 (x86_64)               |
    | CPU            | 2x Xeon E5420 (8 cores total) |
    | RAM            | 8GB                           |
    | SSD filesystem | 500 GiB                       |
    | HDD RAID       | 12 TiB                        |
    There are some other services running on the server using negligible processor time.

    File statistics:
    | number of files  | ~16,000      |
    | total size       | 1.3 TiB      |
    | min size         | 0 bytes      |
    | max size         | 12 GiB       |
    | mean             | 800 MiB      |
    | median           | 500 MiB      |
    | total datapoints | ~200 billion |
    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table
    | column name | type        |
    | id          | PRIMARY KEY |
    | start_time  | TIMESTAMP   |
    | name        | VARCHAR     |

    spectra table
    | column name    | type        |
    | id             | PRIMARY KEY |
    | name           | VARCHAR     |
    | index          | INT         |
    | spectrum_type  | INT         |
    | representation | INT         |
    | run_id         | FOREIGN KEY |

    datapoints table
    | column name | type        |
    | id          | PRIMARY KEY |
    | spectrum_id | FOREIGN KEY |
    | mz          | DOUBLE      |
    | num_counts  | DOUBLE      |
    | index       | INT         |

    Is this reasonable?
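    As a rough back-of-the-envelope (a sketch only; the per-row overhead is an assumed, order-of-magnitude figure, not a measured InnoDB value), the proposed datapoints table alone lands in the same ballpark as the 12 TiB RAID before any indexes are added:

    ```python
    rows = 200e9  # ~200 billion datapoints

    # Assumed per-row payload for the proposed datapoints table:
    # BIGINT id (8) + BIGINT spectrum_id (8) + DOUBLE mz (8) + DOUBLE num_counts (8) + INT index (4)
    payload = 8 + 8 + 8 + 8 + 4
    overhead = 20              # assumed storage-engine overhead per row, order of magnitude only
    per_row = payload + overhead

    print(f"datapoints table alone: ~{rows * per_row / 2**40:.1f} TiB")                   # ~10.2 TiB
    print(f"one secondary index (8-byte key + 8-byte PK): ~{rows * 16 / 2**40:.1f} TiB")  # ~2.9 TiB
    ```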

    Read the article

  • Beautify Integer values for comparing as strings

    - by Zahra
    Hi. For some reason I need to store my integer values in the database as strings, then I want to run a query on them and compare those integers as strings. Is there any way to beautify integer numbers (between 1 and 1 US billion, as an example) so I can compare them as strings? Thanks in advance.
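    One common trick (a sketch, not necessarily the approach the poster settles on) is to zero-pad every value to a fixed width before storing it, so that lexicographic string order matches numeric order:

    ```python
    WIDTH = 10  # enough digits for values up to 1 US billion (1,000,000,000)

    def beautify(n: int) -> str:
        """Zero-pad n so the strings sort in the same order as the integers."""
        return str(n).zfill(WIDTH)

    values = [7, 42, 1_000_000_000, 999]
    print(sorted(beautify(v) for v in values))
    # ['0000000007', '0000000042', '0000000999', '1000000000']

    # String comparison now agrees with numeric comparison:
    assert (beautify(999) < beautify(1_000_000_000)) == (999 < 1_000_000_000)
    ```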

    Read the article

  • Parallel computation of the median of a large array

    - by recipriversexclusion
    I got asked this question once and still haven't been able to figure it out: You have an array of N integers, where N is large, say, a billion. You want to calculate the median value of this array. Assume you have m+1 machines (m workers, one master) to distribute the job to. How would you go about doing this? Since the median is a nonlinear operator, you can't just find the median in each machine and then take the median of those values.
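    A standard approach (a sketch only, with the per-worker count simulated in-process rather than sent over the network) is to binary-search on the value range: the master repeatedly asks every worker how many of its integers are less than or equal to a candidate value, and narrows the range until it pins down the median. For 32-bit integers that is at most about 32 rounds of messages to the m workers, regardless of N:

    ```python
    import random

    def distributed_median(shards):
        """Median via binary search on the integer value range, using only counts from workers.

        Each 'shard' stands in for the data held by one worker machine; in a real
        cluster the count request would be a remote call answered by each worker.
        """
        n = sum(len(s) for s in shards)
        lo = min(min(s) for s in shards)
        hi = max(max(s) for s in shards)
        target = (n - 1) // 2  # 0-based rank of the (lower) median

        def count_less_or_equal(x):
            # Master asks every worker how many of its values are <= x and sums the replies.
            return sum(sum(1 for v in s if v <= x) for s in shards)

        # Find the smallest value m with count_less_or_equal(m) > target; that value is the median.
        while lo < hi:
            mid = (lo + hi) // 2
            if count_less_or_equal(mid) > target:
                hi = mid
            else:
                lo = mid + 1
        return lo

    # Tiny demo: 3 "workers", verified against a direct sort.
    data = [random.randrange(0, 1000) for _ in range(10001)]
    shards = [data[0::3], data[1::3], data[2::3]]
    assert distributed_median(shards) == sorted(data)[len(data) // 2]
    ```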

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally Posted on LinkedIn As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It’s a critical consideration given today’s pace of disruption, which already is straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios. So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I’m not just talking about high-end data scientists who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it. In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is the start of an uphill climb to the top of a roller-coaster. Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smart phones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some data will provide important information about how to run our businesses and lead healthier lives. Much of it will be extraneous. How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge: 1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: Taking care of your employees and taking care of your customers. Companies in today’s hypercompetitive business environment simply won’t be able to survive unless they’ve got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what’s expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world’s top industrialized economies will see a turnover in the workforce on an unprecedented scale. 
Across the United States, Europe, China and Japan, the “baby boomer” generation will be retiring and, by 2020, we’ll see turnovers in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can’t be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21-century talent. 2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the “democratization of data.” This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we’ve seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I’m a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I’m convinced that we’ll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company’s marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say “No” and too few are allowed to say “Yes”? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That’s why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace’s new realities. 3) Partner with universities to help create a new type of highly skilled workers. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it’s incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create. 
These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People. Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise, smart phone adoption is reaching the farthest corners of the globe; the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience. The net result of which is happier customers. The obvious benefits but the lesser realised impact Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no-longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality however is mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change is much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention is higher, and real time feedback constant. 
The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to With real time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisations' evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them means employees don't need high-level technical expertise to train users. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line; all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology Gordon Rich-Phillips unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include, ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • Components needed for VPN

    - by Anriëtte Combrink
    Hi there. We eventually got our Mac Mini Server. We now want to set up a small remote-access VPN using this Mac Mini Server. Firstly, we are not sure what components are needed in addition to the server to set up this VPN. We currently have the following: 1 Mac Mini Server; 1 firewall router (Billion 802.11g ADSL2+ router with VPN capabilities [it says so on the box]); a 4Mbps ADSL connection (which should have VPN capability enabled by the service provider, or so we heard). We are not sure what else needs to be included to enable our small VPN. Any advice would be really helpful.

    Read the article

  • Components needed for a VPN

    - by Anriëtte Combrink
    Hi there. First of all, I asked this question on SuperUser.com, but no one there could help me. Well, we eventually got our Mac Mini Server. We now want to set up a small remote-access VPN using this Mac Mini Server. Firstly, we are not sure what components are needed in addition to the server to set up this VPN. We currently have the following: 1 Mac Mini Server; 1 firewall router (Billion 802.11g ADSL2+ router with VPN capabilities [it says so on the box]), which we currently use as our internet connection and which resides at X.X.0.1 on our network; a 4Mbps ADSL connection (should this line have VPN capability enabled by the service provider?). We are not sure what else needs to be included to enable our small VPN. Any advice would be really helpful.

    Read the article

  • Losing WLAN connections but maintaining internet connections on Windows 7 Workgroup

    - by Di
    I have 4 computers, all running Windows 7, networked in a workgroup through a Billion 7404vgp-m wireless router. All drivers and firmware for the wireless adapters and router are up to date. Windows Firewall and Defender are disabled, and IPv6 is disconnected. Running NOD32 antivirus software. All have their own static IP address (192.XXX.X.XXX). When I reset the router, all computers have Internet and LAN access for about 1 hour, and then they lose the LAN connection but maintain the Internet connection. Resetting the wireless adapters or restarting the computers does nothing to fix this, but resetting the router will. What is causing this and how do I fix it? Thanks, Di

    Read the article

  • VPN Network intermittently fails to provide internet: What could be possible causes

    - by Jake M
    We have a small office with our own VPN setup. We occasionally experience failures in our internet connection where we cannot access the internet. Most of the time the internet connection will resume by itself (without me doing anything) after a period of time (10 mins). Would you be able to suggest possible causes of the connection failure so I can then go and run some tests? Our network architecture is like so: a 'Billion' brand router that is connected to the internet via phone cable and then connected to our Cisco switch; a Cisco switch/bus which is connected to all our office nodes, our external hard drive and also to our router as stated above (all connections are via ethernet cable); a series of work computers (nodes) connected via ethernet cable to the Cisco switch. Our ISP is TPG Australia. We have a Virtual Private Network. All the ethernet cables are about 3 years old. Do you think that the causes of our intermittent connection problems could be due to the following: data collisions in the ethernet cables, old/faulty ethernet cables, or our ISP has bad service? Can you think of any other causes of the problem?

    Read the article

  • What constitutes an entry in iptables, and how to find out which client it originated from

    - by cbd
    I have a Billion BiPac 7700N modem/router/access point, and I connect another router (TP-Link TL-WR1043ND) in WAN-bypass mode to extend the wireless coverage. Lately, I noticed that the connection through the TP-Link has been dropping out quite regularly. Having read some posts on the Internet, I checked the system log on the 7700N and found that there are many "nf_conntrack: expectation table full" errors, which I take to mean the connection-tracking table is full. My questions: What constitutes an entry in that table? Is it a client or a connection (which would mean one client can have multiple entries)? How could I find out where those connections originated from? Note: many reports say the issue is usually related to having torrents running, but I don't have any torrents running. Thank you.

    Read the article

  • Overcrowded Windows XP Folders

    - by BlairHippo
    I know that, technically, an individual Windows XP directory can hold an immense number of files (over 4.29 billion, according to a quick Google search). However, is there a practical ceiling where too many files in one directory starts having an impact on reads to those files? If so, what factors would exacerbate or help the issue? I ask because my employer has several hundred XP machines in the field at client sites, and the performance on some of the older ones is getting "sludgy." The machines download and display client-defined images, and my supervisor and I suspect that our slacktastic approach to cache management could be to blame. (Some of the directories have tens of thousands of images in them.) I'm trying to gather evidence to support or contest the theory before spending time on a coding fix.

    Read the article

  • Can't burn 8.1G iso onto 8.4GB DVD - "Media does not have enough free space"

    - by Max Williams
    I'm trying to burn a DVD on a Mac with an external (FireWire-connected) DVD drive. I'm checking the size of the iso thus:

    DVD-4:dvd_files macbook$ ls -l /tmp/hybrid.iso
    -rw-r--r-- 1 macbook wheel 8700884992 Aug 22 10:57 /tmp/hybrid.iso
    DVD-4:dvd_files macbook$ ls -lh /tmp/hybrid.iso
    -rw-r--r-- 1 macbook wheel 8.1G Aug 22 10:57 /tmp/hybrid.iso

    The "human-readable" size is 8.1 Gig, but when I try to burn onto an 8.4G dual-layer DVD, it says "Media does not have enough free space". The definition of a "gigabyte" according to Wikipedia is 1 billion bytes, so the iso size should actually be 8.7 Gig by this definition, in which case the disk definitely isn't big enough, and it's just that the -h option to ls is misleading. Is the discrepancy just due to the ls command using a different definition of "G" (e.g. 1024 Meg, aka 1.07 Gig? This comes out as 8.103, which fits what ls is displaying)?
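    The arithmetic checks out. A quick sketch of the two conventions (decimal GB, as disc capacities are quoted, versus binary GiB, as ls -lh reports):

    ```python
    size_bytes = 8700884992            # from `ls -l /tmp/hybrid.iso`

    print(size_bytes / 10**9)          # 8.700884992 -> "8.7 GB"  (decimal gigabytes)
    print(size_bytes / 2**30)          # 8.103...    -> "8.1G"    (binary GiB, what `ls -lh` shows)

    # The disc's advertised 8.4 GB is a decimal figure too:
    print(8.4e9 / 2**30)               # ~7.8 GiB of actual capacity
    ```

    So it is not just a display quirk: by either convention the image is larger than the disc.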

    Read the article

  • Losing WLAN connections but maintaining internet connections on Windows 7 Workgroup

    - by Di
    I have 4 computers, all running Windows 7, networked in a workgroup through a Billion 7404vgp-m wireless router. All drivers and firmware for the wireless adapters and router are up to date. Windows Firewall and Defender are disabled, and IPv6 is disconnected. Running NOD32 antivirus software. All have their own static IP address (192.XXX.X.XXX). When I reset the router, all computers have Internet and LAN access for about 1 hour, and then they lose the LAN connection but maintain the Internet connection. Resetting the wireless adapters or restarting the computers does nothing to fix this, but resetting the router will. What is causing this? How do I fix it?

    Read the article

  • Enterprise Data Center System Admin/Engineer to Server Ratio

    - by Bob
    I know there have been similar questions asked over the last few months; however, I am looking at Data Center Operations and know there are some really smart people out there who might be able to help. I'm looking for some staffing best practices based on first-hand experience, and was hoping that there is some experience in this area that can provide "best practice" guidance: three high-availability (99.99% plus) enterprise-level data centers, geographically dispersed, one manned 24x7x365, one lights-out, one co-location, running HOT-HOT-HOT and supporting a global community. More than 2,000 operating systems consisting of 95% Windows, 5% Linux and Solaris, 45% virtualized, more than 100TB of storage. No desktop support, no network administration (administered separately), running N+1 and serving more than 250 billion page views annually. Based on that experience, what has been your server to "Data Center System Administrator/Engineer" ratio? Thanks in advance for your responses.

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in detail the following points: What is static code analysis? When to use it? Supported platforms. Supported Visual Studio versions. How to use it: run Code Analysis manually, run Code Analysis automatically, run Code Analysis while checking in source code to TFS version control (TFSVC), run Code Analysis as part of Team Build. Understand the Code Analysis results and learn how to fix them. Create your custom rule set. Q & A. References.

    What is static code analysis? The Static Code Analysis feature of Visual Studio performs static analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it; this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage, for example.

    When to use? Running the Code Analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms: .NET Framework, native (C and C++) and database applications.

    Supported Visual Studio versions: all versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis manually: To run code analysis manually on a project, on the Analyze menu click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis automatically: To run code analysis each time you build a project, select Enable Code Analysis on Build on the project's property page.

    Run Code Analysis while checking in source code to TFS version control (TFSVC): Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in.
    (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx. In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need, as illustrated below:

    Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.

    Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.

    Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in. Check the Code Analysis rule set reference on MSDN.

    What is a rule set? A rule set is a group of code analysis rules, like the example below, where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule. Once you have configured the analysis rule, the policy is enabled for all the team members in this project; whenever a team member checks in any source code to TFSVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx. You also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx.

    Run Code Analysis as part of Team Build: With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

    Understand the Code Analysis results and learn how to fix them: Now that you have gone through the Code Analysis configurations and the different ways of running it, we will go through the Code Analysis results: how to understand them and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project properties. Let's dig into what each result item contains:

    1. Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
    2. Title: the title of the warning message.
    3. Description: a description of the problem or a suggested fix.
    4. File Name: the file name and line number of the code which violates the code analysis rule set.
    5. Category: the code analysis category for this error.
    6. Warning/Error: depends on how you configure it in the rule set; the default is Warning level.
    7. Action: Copy copies the warning information to the clipboard. Create Work Item: if you're connected to Team Foundation Server you can create a work item, most probably a Task or Bug, and assign it to a developer to fix a certain code analysis warning. Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning, which makes the suppression more discoverable; In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project, which can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning; it does not suppress the warning globally.

    Visual Studio makes it very easy to fix a code analysis warning: all you have to do, if you are not sure how to fix it, is click on the Check ID hyperlink and you'll be directed to MSDN online or a local copy (based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a custom Code Analysis rule set: The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A: Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server database projects? This post illustrates how to run static code analysis on T-SQL through SSDT. ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • EU Research for ICT - Call 7 - biggest ever at € 780 million

    - by trond-arne.undheim
    Under the Digital Agenda for Europe, the Commission has committed to maintaining the pace of a 20% yearly increase of the annual ICT R&D budget at least until 2013. The EU's flagship policy programme calls for doubling of annual public spending on ICT R&D by 2020 and to leverage an equivalent increase in private spending to achieve the goals of Europe's 2020 strategy for jobs and growth. Call 7 is one of the biggest calls ever launched for information and communications technology (ICT) research proposals under the EU's research framework programmes. It will result in project funding of € 780 million in 2011. This funding will advance research on the future internet, robotics, smart and embedded systems, photonics, ICT for energy efficiency, health and well-being in an ageing society, and more. The €780 million call for proposals is part of the biggest ever annual Work Programme under the EU's 7th Framework Programme for Research. Almost €1.2 billion has been budgeted for 2011. €220 million were made available already in July 2010 for public private partnerships focusing on ICT for smart cars, green buildings, sustainable factories and the future internet. Universities, research centres, SMEs, large companies and other organisations in Europe and beyond are eligible to apply for project funding under ICT Call 7. Proposals can be submitted until 18 January 2011, after which they will be evaluated by independent panels of experts for selection on the basis of their quality. Background: Digital Agenda: European Commission announces €780 million boost for strategic ICT research. Call text: ICT Call 7 Deadline: 18/01/2011.

    Read the article

  • Step Aside Google

    - by David Dorf
    Step aside Google. While search will always be a huge part of the web, I can see a day in the not-too-distant future where search takes a backseat to the social graph. Links between pages will give way to relationships between people, including context like location. What does this mean for retail? It means your e-commerce strategy will slowly transition to an f-commerce strategy. Remember when a large portion of the online population was held captive inside the walls of AOL? All the commercials listed an AOL keyword, not a web address, because that's where the majority of people surfed. Now, people are spending a huge amount of time in Facebook (despite Betty White's proclamation that it's a big waste of time). According to Facebook, users spend 500 billion minutes per month on the site. Selling products where consumers are spending their time makes sense. The power of Like and Share is the most effective approach to marketing. More and more stores are popping up on Facebook, and soon they will be the front-end to e-commerce systems. As sites adopt the Facebook Open Graph API, users will have a harder time distinguishing the open web from their Facebook experience, including shopping. Of course e-commerce sites won't go away, but a large portion of their traffic will emanate from Facebook, and in some cases Facebook will act as the front-end for the web store. Ignore Facebook Open Graph at your peril. In a Mashable article, Mitchell Harper made several predictions about how e-commerce will change based on Facebook. His five points are not far-fetched at all, so we need to watch this space carefully.

    Read the article

  • Java Spotlight Episode 75: Greg Luck on JSR 107 Java Temporary Caching API

    - by Roger Brinkley
    Tweet Recorded live at Jfokus 2012, an interview with Greg Luck on JSR 107 Java Temporary Caching API. Joining us this week on the Java All Star Developer Panel is Alexis Moussine-Pouchkine, Java EE Developer Advocate. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News JavaOne 2012 call for papers is open (closes April 9th) LightFish, Adam Bien's lightweight telemetry application Java EE 6 sample code JavaFX 1.2 and 1.3 EOL Repeating Annotations in the Works Events March 26-29, EclipseCon, Reston, USA March 27, Virtual Developer Days - Java (Asia Pacific (English)),9:30 am to 2:00pm IST / 12:00pm to 4.30pm SGT  / 3.00pm - 7.30pm AEDT April 4-5, JavaOne Japan, Tokyo, Japan April 12, GreenJUG, Greenville, SC April 17-18, JavaOne Russia, Moscow Russia April 18–20, Devoxx France, Paris, France April 26, Mix-IT, Lyon, France, May 3-4, JavaOne India, Hyderabad, India Feature Interview Greg Luck founded Ehcache in 2003. He regularly speaks at conferences, writes and codes. He has also founded and maintains the JPam and Spnego open source projects, which are security focused. Prior to joining Terracotta in 2009, Greg was Chief Architect at Wotif.com where he provided technical leadership as the company went from a single product startup to a billion dollar public company with multiple product lines. Before that Greg was a consultant for ThoughtWorks with engagements in the US and Australia in the travel, health care, geospatial, banking and insurance industries. Before doing programming, Greg managed IT. He was CIO at Virgin Blue, Tempo Services, Stamford Hotels and Resorts and Australian Resorts. He is a Chartered Accountant, and spent 7 years with KPMG in small business and insolvency. Mail Bag What’s Cool RT @harkje: To update an earlier tweet: #JavaFX feels like Swing with added convenience methods, better looking widgets, nice effects and...

    Read the article
