Search Results

Search found 6177 results on 248 pages for 'reputation points'.


  • Comparing Apples and Pairs

    - by Tony Davis
    A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the costs-to-benefits ratio of pair programming: two programmers working together at a single computer, rather than separately. He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits. The author provides a more scientific analysis than Jon Evans’ Pair Programming Considered Harmful, but the theme is the same. In terms of upfront coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author’s data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment. There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective. For example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn’t distinguish between “expert” and “novice” pair programmers – that is, independently of other programming skills, how skilled an individual was at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don’t pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge in a particular domain. All of them argue that pair programming, and collective code ownership, offers significant rewards, if not in terms of immediate “bug reduction”, then in removing the likelihood of single points of failure, and in improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked in the same team over consecutive summers, the first time with no pair programming and the second time pair programming two-thirds of the time, and described the increased rate of learning the second time as “phenomenal”. There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people’s data, I wonder if developers spend enough time crunching data about themselves. Capers Jones’ data may be incomplete, but it should give pause for thought, especially to any large IT departments, supporting commerce and industry, that are considering pair programming. It certainly shouldn’t discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • Need help with chronic slow wifi connections

    - by mgeorge
    I have had chronic slow wifi connections on several networks for a while now, I think since my update to 12.04 when it came out. I have tried many of the tips and tricks already available out there in the forums with no luck (wicd, etc.). I want to see if any of you experts out there might be able to help me, and thanks in advance! I use Ubuntu 12.04 on a Lenovo IdeaPad Y650, and most networks I connect to lose the connection frequently or do not give appropriate bandwidth when I am connected. Here are some results of the usual go-to system checks:

    cat /etc/lsb-release; uname -a:
    DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
    Linux mgeorge-lenovo 3.2.0-48-generic-pae #74-Ubuntu SMP Thu Jun 6 20:05:01 UTC 2013 i686 i686 i386 GNU/Linux

    lspci -nnk | grep -iA2 net:
    04:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection [8086:4237] Subsystem: Intel Corporation WiFi Link 5100 AGN [8086:1211] Kernel driver in use: iwlwifi
    08:00.0 Ethernet controller [0200]: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe [14e4:1698] (rev 10) Subsystem: Lenovo Device [17aa:3878] Kernel driver in use: tg3

    lsusb:
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 002: ID 090c:7371 Silicon Motion, Inc. - Taiwan (formerly Feiya Technology Corp.)

    iwconfig:
    lo no wireless extensions.
    wlan0 IEEE 802.11abgn ESSID:"Dyno" Mode:Managed Frequency:2.412 GHz Access Point: CC:5D:4E:46:0A:93 Bit Rate=150 Mb/s Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=70/70 Signal level=-40 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:948 Invalid misc:728 Missed beacon:0
    eth0 no wireless extensions.

    rfkill list all:
    0: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no
    1: ideapad_bluetooth: Bluetooth Soft blocked: yes Hard blocked: no
    2: phy0: Wireless LAN Soft blocked: no Hard blocked: no

    lsmod:
    Module Size Used by
    uvcvideo 67203 0
    videodev 86588 1 uvcvideo
    nouveau 712674 3
    ttm 65344 1 nouveau
    drm_kms_helper 45466 1 nouveau
    drm 197641 5 nouveau,ttm,drm_kms_helper
    i2c_algo_bit 13199 1 nouveau
    mxm_wmi 12893 1 nouveau
    wmi 18744 1 mxm_wmi
    joydev 17393 0
    arc4 12473 2
    snd_hda_codec_realtek 174313 1
    snd_hda_codec_hdmi 31775 1
    snd_hda_intel 32719 3
    snd_hda_codec 109562 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel
    snd_hwdep 13276 1 snd_hda_codec
    snd_pcm 80916 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
    snd_seq_midi 13132 0
    snd_rawmidi 25424 1 snd_seq_midi
    snd_seq_midi_event 14475 1 snd_seq_midi
    psmouse 86520 0
    snd_seq 51592 2 snd_seq_midi,snd_seq_midi_event
    serio_raw 13027 0
    snd_timer 28931 2 snd_pcm,snd_seq
    snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq
    iwlwifi 366509 0
    mac80211 436493 1 iwlwifi
    snd 62218 16 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
    ideapad_laptop 17890 0
    sparse_keymap 13658 1 ideapad_laptop
    cfg80211 178877 2 iwlwifi,mac80211
    ir_lirc_codec 12739 0
    lirc_dev 18700 1 ir_lirc_codec
    soundcore 14635 1 snd
    snd_page_alloc 14108 2 snd_hda_intel,snd_pcm
    ir_mce_kbd_decoder 12681 0
    ir_sony_decoder 12462 0
    ir_jvc_decoder 12459 0
    ir_rc6_decoder 12459 0
    ir_rc5_decoder 12459 0
    rc_rc6_mce 12454 0
    ir_nec_decoder 12459 0
    video 19115 1 nouveau
    ene_ir 18019 0
    rc_core 21263 10 ir_lirc_codec,ir_mce_kbd_decoder,ir_sony_decoder,ir_jvc_decoder,ir_rc6_decoder,ir_rc5_decoder,rc_rc6_mce,ir_nec_decoder,ene_ir
    bnep 17830 2
    rfcomm 38139 0
    parport_pc 32114 0
    bluetooth 158479 10 bnep,rfcomm
    ppdev 12849 0
    binfmt_misc 17292 1
    mac_hid 13077 0
    lp 17455 0
    parport 40930 3 parport_pc,ppdev,lp
    tg3 141414 0

    nm-tool:
    NetworkManager Tool
    State: connected (global)
    - Device: eth0
      Type: Wired, Driver: tg3, State: unavailable, Default: no, HW Address: 00:23:5A:CC:85:BD
      Capabilities: Carrier Detect: yes
      Wired Properties: Carrier: off
    - Device: wlan0 [Dyno]
      Type: 802.11 WiFi, Driver: iwlwifi, State: connected, Default: yes, HW Address: 00:22:FA:D0:94:CA
      Capabilities: Speed: 150 Mb/s
      Wireless Properties: WEP Encryption: yes, WPA Encryption: yes, WPA2 Encryption: yes
      Wireless Access Points (* = current AP):
      *Dyno: Infra, CC:5D:4E:46:0A:93, Freq 2412 MHz, Rate 54 Mb/s, Strength 81, WPA2
      IPv4 Settings: Address: 10.0.0.43, Prefix: 24 (255.255.255.0), Gateway: 10.0.0.1, DNS: 10.0.0.1

    Read the article

  • Separation of project responsibilities in new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap of work and so make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months to 1 year (although not all developers are likely to be on it the whole time and might filter off towards the end). Our team is going to be small, so this will help out a bit I believe. The team will essentially consist of:

    3 x developers (1 slightly more experienced, who will be the lead)
    1 x project manager / product owner / tester
    An external company responsible for doing our design work

    General project/development decisions so far have included:

    Develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company)
    Use MVVM architecture
    Use Ninject and DI where possible
    Attempt to use TDD as much as possible to drive development
    Keep our controllers as skinny as possible
    Keep our views as simple as possible

    During our discussions two approaches have been broached as to how to separate the workload given our objectives outlined above.

    OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.

    View prototypes (**Graphic designer**)
     - Mockups
    Views (Razor and view helpers etc.) & Javascript (**Developer 1**)
     - View models (integration point)
    Controllers and Application logic (**Developer 2**)
     - Models (integration point)
    Domain model and persistence (**Developer 3**)

    PROS:
    Integration points are quite clear, so developers can work without dependencies on others fairly easily.
    Code practices such as naming conventions and style are more easily kept consistent, as primarily only one developer will be handling an area.

    CONS:
    Completion of an entire feature becomes a bit grey, as no single person is responsible for an entire feature (story?).
    A person might not have a full appreciation for all areas of the project, and so knowledge overlap might be lacking if suddenly that person left.

    OPTION 2: A more task-orientated approach where each person is responsible for the completion of an entire task, from view to controller to model.

    PROS:
    A person is responsible for one entire feature, so its "complete" state can be clearly defined.
    Code overlap into different areas will occur, so each individual has good coverage over the entire application.

    CONS:
    Overlap of development will occur in all the modules, and developers can develop/extend without a true understanding of what the original code owner was intending. This could potentially lead more easily to code bloat?
    Following a convention might be harder as developers are adding to all areas of the project.
    If a developer sets up a way of doing things, would it be harder to get the other developers to follow that convention, or even build on it (or even discuss it?). Dunno...
    Bugs could more easily be introduced into areas not thought about by the developer.
    It's easier to carry a team member, in so far as one member just hacks code together to complete a task whilst another takes time to build a foundation that could be used by others and so help make future tasks easier, i.e. starts building a framework?
    QUESTION: As it might appear, I'm more in favor of option 1; however, I'm interested to see how others might have approached this, or what the standard, best, or preferred way of undertaking a project is. Or indeed any different approach to handling this?

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we’ve started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it’s theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we’ve made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn’t that easy. The money and product don’t change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don’t know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis. What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write-up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC. To make it a bit less of an alphabet soup and a bit more understandable, we can write it out in full: the lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service. Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it’s revenue divided by customers. The problem comes when we realise that we don’t know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months’ worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate. The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn’t have enough data to make an estimate of one month’s cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above; we just need to multiply the final result by three (as we worked out how many three-month periods customers stay for, and we want our answer to be in months). Now these estimates are likely to be fairly unreliable; when there’s not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it’s super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we’re looking for!
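    To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The figures are hypothetical, purely for illustration; they are not Red Gate's numbers.

        # Hypothetical subscription figures, for illustration only.
        monthly_revenue = 5000.0     # total revenue per month ($)
        active_customers = 100       # current paying customers
        cancellations = 6            # cancellations over a three-month window

        arpu = monthly_revenue / active_customers        # average revenue per user: $50/month
        churn_3mo = cancellations / active_customers     # churn per three-month period: 6%
        lifetime_months = (1 / churn_3mo) * 3            # convert three-month periods to months

        ltv = arpu * lifetime_months
        print(f"ARPU ${arpu:.2f}/month, lifetime {lifetime_months:.1f} months, LTV ${ltv:.2f}")
        # -> ARPU $50.00/month, lifetime 50.0 months, LTV $2500.00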

    Read the article

  • Brazil is Hot for Social Media

    - by Mike Stiles
    Today’s guest blog is from Oracle SVP Product Development Reggie Bradford, fresh off a visit to Sao Paulo, Brazil, where he spoke at the Dachis Social Business Summit and spent some time getting a personal taste for the astonishing growth of social in Brazil, both in terms of usage and engagement. I knew it was big, but I now have an all-new appreciation for why the Wall Street Journal branded Brazil the “social media capital of the universe.” Brazil has the world’s 5th largest economy, an expanding middle class, an active younger demo market, a connected & outgoing culture, and an ongoing embrace of social media platforms. According to comScore's 2012 Brazil Digital Future in Focus report, 97% are using social media, and that’s not even taking mobile-only users into account. There were 65 million Facebook users in 2012, spending an average of 535 minutes there, up 208%. It’s one of Twitter’s fastest growing markets and the 2nd biggest market for YouTube. Instagram usage has grown over 300% since last year. That by itself is exciting, but look at the opportunity for social marketing brands. 74% of Brazilian social users follow brands on Facebook, and 59% have praised a company on either Twitter or Facebook. A 2011 Oh! Panel study found 81% of social networkers there used social to research new products and 75% went there looking for discounts. B2C eCommerce sales in Brazil are projected to hit $26.9 billion by 2015. I bet I’m not the only one who sees great things ahead, and I was fortunate enough to give a keynote at ABRADI, an association of leading digital agencies in Brazil, with 53 execs from 35 agencies attending. I was also afforded the opportunity to give my impressions of what’s going on in Brazil to Jornal Propaganda & Marketing, one of the most popular publications in Latin America for marketers. I conveyed that, especially in an environment like Brazil, where social users are so willing to connect and engage brands, marketers need to back away from the heavy-handed, one-way messaging of old-school advertising and move toward genuine relationships and trust-building. To aid in this, organizational and operational changes must be embraced inside the enterprise. We've talked often about the new, tighter partnership forming between the CIO and CMO. If this partnership is not encouraged, fostered and resourced, the increasing amount of time consumers spend on mobile and digital, and the efficiencies and integrations offered by cloud-based software, cannot be exploited. These are the kinds of changes that can yield social data that, when combined with enterprise data, helps you come to know your social audiences intimately and predict their needs. Consumers are always connected and need your brand to be accessible at any time, be it for information or customer service. And, of course, all of this is happening quite publicly. The holistic, socially-enabled enterprise connects social to customer service systems and all other customer touch points, facilitating the kind of immediate, real-time, gratifying response customers are coming to expect. Social users in Brazil are highly active and clearly willing to meet us as brands more than halfway. Empowering yourself with a social management technology platform will have you set up to maximize this booming social market…from listening & monitoring to engagement to analytics to workflow & automation to globalization & language support. Brands, it’s time to be as social as the great people of Brazil are. Obrigado!
    @reggiebradford
    Photo: Gualberto107, freedigitalphotos.net

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics over a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:

    R> library(ORE)
    R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
    R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device.

    Option #1: Raise the Remote Desktop Color Depth setting
    In a Remote Desktop session, all environment variables, including the display variables determining Color Depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The different settings are 15, 16, 24, or 32 bits per pixel. To raise the Remote Desktop color depth:
    On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.
    Under Connections, right click on RDP-Tcp and select Properties.
    On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel.
    Click Apply, then OK, log out of the remote session and reconnect. After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel.
    Raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot.

    Option #2: Direct plot output to an alternate device
    Plotting to a non-windows device is a good option if it's not possible to increase the Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.
    R> options(device = "CairoWin") # use the Cairo device for plotting during the session
    R> library(Cairo) # load the Cairo, grid and png libraries
    R> library(grid)
    R> library(png)
    R> res <- ore.doEval(function() image(volcano, col=terrain.colors(30))) # create embedded R plot
    R> img <- ore.pull(res, graphics = TRUE)$img[[1]] # extract image
    R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE) # generate raster graph
    R> dev.off() # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

    R> help(Devices)

    Read the article

  • Free hosting solution for a very low-traffic website [duplicate]

    - by user966939
    This question already has an answer here: How to find web hosting that meets my requirements? (4 answers)

    I run a very low-traffic website (about 40 users, basically all of whom are daily active on the site). I don't see it changing anytime soon either, as there is no way to sign up on the site right now. Until now I have just been using a sub-directory on a friend's (shared) host to run the web site. But in only a few weeks from now, his subscription will end, and he has no plans on renewing it. So of course this means I'll have to move on to something else. But I don't think I'll find someone who'd be willing to share a... shared host with me again. And besides, the software used on that server is ancient (PHP 4.4.9 + MySQL 4.1.22). There's one obvious solution that comes to mind, I guess: choose a better host and pay for it myself. The problem here is that I have no real fixed income, as I'm only a student. So even if the pricing is dirt cheap, I just can't be certain I will be able to afford it, every single month, for... at least 2 years maybe? So I've looked at free hosting solutions instead. The minimum requirement I had was that it was completely free of ads. But no matter where I look, I always find something in a corner or two ("what can you expect from a free host?" - yeah I know, but I guess it was worth a shot). For example, on Byethost (one of the free hosts I tried), if you trigger a PHP error while error reporting is set to E_ALL, you will spawn some hidden ad... Besides Byethost, I've tried 000Webhost, x10Hosting, 2Freehosting/1Freehosting, Wink.ws, and they are only worse. Okay, I'm running low on ideas. But! What if I just hosted the site myself, on my own computer? That could work. I actually do have my computer on practically 24/7. But not really. Sometimes I need to reboot it, and sometimes we even have power outages. And what if the hardware needs an upgrade? It's not such a big deal for me if the site goes down, because I know what's going on; but what about the users? If I do decide to host it myself, is there some way to show users an alternate page instead of them just seeing a generic "server not found" page in the browser when the site is not accessible? Or is there something I have been missing out on? Is there a different kind of "web hosting" solution out there that I haven't heard of?

    Here is what I'm really looking for:
    - Free (as in, no costs)
    - NO ads
    - Bandwidth enough for a low-traffic forum with roughly 40 users
    - (Semi-)up-to-date PHP and MySQL (at least not older than a year)
    - No standard (non-extension) PHP functions turned off, such as sleep()
    - The mbstring extension enabled
    - Disk space: at least 5 MB
    - At least one MySQL database

    Some bonus points would be:
    - Max execution time of PHP scripts can be set
    - Remote access to the MySQL database

    What would be the best solution for me? Is there one?

    Read the article

  • BIOS memory settings and virtualization + Ubuntu (Unofficial Answers Welcome) [closed]

    - by TardisGuy
    Attempting to optimize my (main, Windows-less) Ubuntu system for my uses, I will detail questions below. I understand this might be the wrong place to ask these questions; if so, my apologies, and I thank you so much for your patience. Thanks to all the volunteers that have helped me learn Ubuntu over the years (since 5.10). This is a "short" list of questions I have been trying to figure out for some time. If you feel you can answer one but not another, that's already more than I could ask for. I have written this up in a format for easy navigation to the important points, hopefully to less annoy your eyes. You're welcome :) or I'm sorry I annoy you. :( If you would be so kind, please format answers as follows: question 1: _ _ _ _ _ or question 1-a: _ _ _ _ _ If you want to simply link me to relevant information, rather than type up something really detailed, that would be more than awesome!

    Memory Specific Questions
    Goal: maximizing memory bandwidth to better perform in virtualization and large file compression. (Possible conflict?)
    1. Ganged vs. unganged: "which is better?" is relative, I know. But what about ganged vs. unganged, with or without bank/channel interleaving?
      a. Speculation: if I understand correctly, channel interleaving has something to do with using both channels to read or write in a kind of "striping" pattern, as opposed to a standard half-duplex operation (probably wrong). But wouldn't ganged channels make this irrelevant?
    2. Memory interleaving (bank): does it have a downside? Does it require a ratio of clocks? (If I run 4x4 GB DDR3.)
      a. If I'm reading correctly (trying to learn), this is designed to spread operations between latency cycles to work around the higher latency of "normal" operation.
      b. However, it seems to me that it has to be divisible by fractions of a master clock? So if I run memory at 1333 MHz, then the mean between 2 (physical) banks would operate every (roughly) 600 MHz? Warning! Possibly utter nonsense: (1333/2, interleaving to act like 1 memory module per 2 sticks of a total of 4 sticks, meaning 2x channels@4)
      c. Which makes me wonder if there would be left-over clock cycles the system would have to "truncate/balance" or something? But I'm certain there's a feature somewhere I don't understand.

    Virtualization Questions
    3. AMD-V - option of IOMMU. I turned it on; why do I have an extra option of "64MB"? If IOMMU is on but "64MB" is "disabled", is it on? (I have scoured Google; I still don't know.)
      a. I think I understand that it's supposed to (kind of) "set aside" a part of RAM to act as a faster interactive zone for "stuff" (USB, graphics, and... what?).
      b. I am using Nvidia graphics on AMD. (I used the kernel options "iommu=pt iommu=1"; pt = "passthrough"? No idea what they do; found it on Google to solve a boot-up issue.)
      c. Will this option help me use low-latency sound hardware, like my MIDI keyboard?
    4. Can you recommend any additional tweaks?
      a. sysctl settings?
      b. swap settings?

    Grats, you've reached the end. Thanks for reading.

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for the past several days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown:

    var url = 'http://' + document.location.host + '/glassfish-sse/simple';
    eventSource = new EventSource(url);
    eventSource.onmessage = function (event) {
        var theParagraph = document.createElement('p');
        theParagraph.innerHTML = event.data.toString();
        document.body.appendChild(theParagraph);
    }

    This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you'd parse JSON and do other processing to display if some other data format is received from the URL. The URL to which the EventSource is subscribed is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below:

    @ServerSentEvent("/simple")
    public class MySimpleHandler extends ServerSentEventHandler {
        public void sendMessage(String data) {
            try {
                connection.sendMessage(data);
            } catch (IOException ex) { . . . }
        }
    }

    And then events can be sent to this handler using a singleton session bean as shown:

    @Startup
    @Stateless
    public class SimpleEvent {
        @Inject @ServerSentEventContext("/simple")
        ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers;

        @Schedule(hour="*", minute="*", second="*/10")
        public void sendDate() {
            for(MySimpleHandler handler : simpleHandlers.getHandlers()) {
                handler.sendMessage(new Date().toString());
            }
        }
    }

    This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like:

    Accept: text/event-stream
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Cache-Control: no-cache
    Connection: keep-alive
    Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b
    Host: localhost:8080
    Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5

    And the response headers:

    Content-Type: text/event-stream
    Date: Thu, 14 Jun 2012 21:16:10 GMT
    Server: GlassFish Server Open Source Edition 4.0
    Transfer-Encoding: chunked
    X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6)

    Notice, the MIME type of the messages from server to client is text/event-stream, and that is defined by the specification.
    The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below:

    @Schedule(hour="*", minute="*", second="*/10")
    public void sendTweets() {
        for(MyTwitterHandler handler : twitterHandler.getHandlers()) {
            String result = twitter.search("glassfish", String.class);
            handler.sendMessage(result);
        }
    }

    The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here, and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.

    Read the article

  • Test your internet connection - Emtel Fixed Broadband

    Already at the beginning of April, I had a phone conversation with my representative at Emtel Ltd. about some upcoming issues due to the ongoing construction work in my neighbourhood. Unfortunately, they finally raised the house two levels above ours, and of course this has to have a negative impact on the visibility between the WiMAX outdoor unit on the roof and the targeted access point at Medine. So, today I had a technical team here to do a site survey and to come up with potential solutions. Short version: it doesn't look good after all. The site survey: Well, the two technicians did their work properly, and even re-arranged the antenna to check the connection with another end point down at La Preneuse. But no improvements. Looks like we are out of luck, since the construction next door hasn't finished yet and, at the moment, it even looks like they are planning to put at least one more level on top. I really wonder about the sanity of the responsible bodies at the local district council. But that's another story. Anyway, the outdoor unit was once again pointed towards Medine and properly fixed with new cable guides (air from the sea and rust...). Both of them did a good job and fine-tuned the reception signal to a mere 3 over 9, compared to the original 7 over 9 I had before the daily terror started. The site survey has been done, and now it's up to Emtel to come up with (better) solutions. Well, I wouldn't mind having an unlimited, symmetric 3G/UMTS or even LTE connection. Let's see what they can do... Testing the connection: There are several online sites available which let you check certain aspects of your internet connection. Personally, I'm used to speedtest.net and it works very well. I think it is good and necessary to check your connection from time to time, and only a couple of days ago, I posted the following on Emtel's wall at Facebook (21.05.2013 - 14:06 hrs): "Dear Emtel, could you eventually provide an answer on the miserable results of SpeedTest? I chose Rose Hill (Hosted by Emtel Ltd.) as testing endpoint..." Sadly, no response to this. Seems that the marketing department is not willing to deal with customers on Facebook. Okay, over at speedtest.net you can use their Flash-based test suite to check your connection to quite a number of servers of different providers world-wide. It's actually very interesting to see the results for different end points and to compare them to each other. The results: Following are the results for Rose Hill (hosted by Emtel) and for Frankfurt, Germany (hosted by Vodafone DE): Speedtest.net result of 30.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Fixed Broadband). Speedtest.net result of 30.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Fixed Broadband). Luckily, the results are quite similar in terms of connection speed, which is good. I'm currently on a WiMAX tariff called 'Classic Browsing 2', or Fixed Broadband as they call it now, which provides a symmetric line of 768 Kbps (or roughly 0.75 Mbps). In terms of downloads or uploads this means that I would be able to transfer files in either direction at approximately 96 KB/s. Frankly speaking, thanks to compression, my choice of browser, and my operating system, I usually exceed this value and have download rates of up to 120 KB/s - not too bad after all. Only the ping times are a little bit of a concern.
    Due to the difference in distance, or better said, based on the number of hops between the endpoints, they indicate the amount of time that it takes to send a packet from your machine to the remote server and get a response back. A lower value is better, and usually the ping is less than 300 ms between Mauritius and Europe. The alternatives in Mauritius: Not sure whether I should note this down, because for my requirements there are no alternatives to Emtel WiMAX at the moment. It would be great to have your opinion on the situation of internet connectivity in Mauritius. Are there really alternatives? And if so, what are the conditions?

    Read the article

  • D'Arcy's Book Club - The New Strategic Selling

    - by D'Arcy Lussier
    The New Strategic Selling
    Miller and Heiman
    Amazon.ca | Amazon.com | Chapters

    Everybody is a salesman. Every day, without knowing it, we sell something to someone. Now, the typical vision people think of when they hear the word “sales” is the sleazy used car salesperson who does whatever they can to get you to buy the clunker on their lot. But selling is not an action tied to money and products. Selling is about convincing people to see your point of view and act on it. If you want your company to cover a trip to a conference, you may have to sell the idea to your boss. If you want to buy that new big screen TV, you have to sell the idea to your significant other. If you want to go on a weekend fishing trip with the boys, you might be called in to help sell the idea to your buddy's wife. We all sell, but we don’t all sell very well. So enter The New Strategic Selling, a book based on the sales course put on by the Miller-Heiman group. In fact, this isn’t really a “new” strategy for selling, as it's been around for a number of years. But the concepts it presents, the ideas about selling, are still very radical compared with what most of us have experienced. Gone is the high-pressure, win-at-all-costs, Glengarry Glen Ross style of sales…instead the book presents a framework to switch to need-based selling. It’s the idea that instead of going in raving about a product or service, you build a relationship where the buyer expresses what their needs are and your response is to present a solution that best fits that need. Instead of focussing on the amount of money you can squeeze out of a client, you focus on whether everyone wins, that they receive win-results from the engagement, and that repeat business is developed over time, delivering value over and over again. The great thing about the book is that what it teaches…things like how to identify different buying influencers, how to prepare for meetings, techniques to solicit information about what the buyer is really thinking/feeling…these things are entirely applicable in *any* situation where you need to sell to someone…and remember: selling is convincing people to see your point of view and act on it. So that new big screen TV you want to buy but need to convince your wife on? This book can help you. That training opportunity you want your company to send you on? This book can help you. The upgrade to your community park that you want to lobby the local civic authorities for? This book can help you. The book is a bit wordy. I found that the length could have been reduced and the points still would have gotten across. That’s really the only knock that I have, though; the insight it provides is so worthwhile that having to chew through extra words is well worth it. You definitely don’t have to be a professional salesperson to benefit from this book. Rating: 4/5

    Read the article

  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
    For as long as brand social has been around, there’s still an amazing disparity from company to company on the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage. So what does a premier, “worth their weight in gold” Community Manager look like? More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They’re fun to be around. They aren’t salespeople. Give me one Community Manager who’s been at the job 6 months over 5 focus groups any day. Because, not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they’ll want to return and keep the relationship going. They’re informers and entertainers, with a true belief in the value of the brand’s proposition. Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don’t understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CMs will spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they’re building, even if they can’t always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives that simply don’t seem to want to hear it. Some can get the answers fans want quickly; others are frustrated in their ability to respond within an impressive timeframe. In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most…human. They are the true emotional connection to the real-life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization. Meet our 3 finalists for Community Manager of the Year.

    Jeff Esposito with Vistaprint
    Jeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies.

    Stacey Acevero with Vocus
    In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She’s been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012.

    Carly Severn with the San Francisco Ballet
    Carly drives engagement, widens the fanbase and generates digital content for America’s oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English.
We invite you to join us at the first annual Oracle Social Media Summit November 14 and 15 at the Wynn in Las Vegas where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code is available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder. But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world; you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more, this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service
    Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optionally) a virtual directory where you want to install the service. Note: If you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer. Open web.config in a text editor and locate the <applicationSettings> element. Edit the following setting values:

    FeedTitle: This is the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
    BaseURI: When Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be on the following format: http://SERVER/[VDIR]/gallery/extension/
    VSIXAbsolutePath: This is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder.

    Save web.config to finish the installation. Open a browser and enter the URL to the service.
    It should show an empty Feed page.

    Adding the Private Gallery in Visual Studio 2012
    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows:
    Go to Tools –> Options and select Environment –> Extensions and Updates.
    Press Add to add a new gallery.
    Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
    Press OK to save the settings.

    Deploying an Extension
    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery. I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!
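    For anyone curious what such a service produces, here is a rough sketch of generating a bare-bones Atom feed in Python. This shows only the generic Atom shape; the exact elements Visual Studio expects for extension galleries are described on the MSDN page mentioned above, and the gallery name, URL, and file name below are made up for illustration.

        import xml.etree.ElementTree as ET
        from datetime import datetime, timezone

        ATOM = "http://www.w3.org/2005/Atom"
        ET.register_namespace("", ATOM)

        def build_feed(title, base_uri, vsix_files):
            """Build a minimal Atom feed with one entry per VSIX file found on disk."""
            feed = ET.Element(f"{{{ATOM}}}feed")
            ET.SubElement(feed, f"{{{ATOM}}}title").text = title
            ET.SubElement(feed, f"{{{ATOM}}}updated").text = datetime.now(timezone.utc).isoformat()
            for name in vsix_files:
                entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
                ET.SubElement(entry, f"{{{ATOM}}}id").text = name
                ET.SubElement(entry, f"{{{ATOM}}}title").text = name
                # Visual Studio fetches the package from BaseURI + extension name.
                ET.SubElement(entry, f"{{{ATOM}}}link", rel="enclosure", href=base_uri + name)
            return ET.tostring(feed, encoding="unicode")

        print(build_feed("Inmeta Gallery",
                         "http://server/vsgallery/gallery/extension/",
                         ["MyPolicies.vsix"]))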

    Read the article

  • How to place rooms procedurally (rule based) in a game world

    - by gardian06
    I am trying to design the algorithm for my level generation, which is a rule-driven system. I have created all the rules for the system. I have taken care to ensure that all rooms make sense in a grid-type setup. For example: these rooms could make this configuration. The logic flow code that I have so far:

    Door {
        Vector3 position;
        POD orient; // 5 possible values (up is not an option)
        bool Open;
    }

    Room {
        String roomRule;
        Vector3 roomPos;
        Vector3 dimensions;
        POD roomOrient; // 4 possible values
        List<Door> doors;
    }

    LevelManager {
        float scale = 18f;
        List<Room> usedRooms;
        List<Door> openDoors;
        bool Grid[][][];

        Room CreateRoom(String rule, Vector3 position, POD orient) {
            // place received values based on rule
            // fill in other data
        }

        Vector3 getDimensions(String rule) {
            // return dimensions of the room
        }

        RotateRoom(POD rotateAmount) {
            // rotate all items in the room
        }

        MoveRoom(Room toBeMoved, POD orientation, float distance) {
            // move the position of the room based on inputs
        }

        GenerateMap(Vector3 size, Vector3 start, Vector3 end) {
            Grid = array[size.y][size.x][size.z];
            Room floatingRoom;
            floatingRoom = Room.CreateRoom(S01, start, rand(4));
            usedRooms.Add(floatingRoom);
            foreach (Door door in floatingRoom.doors) { openDoors.Add(door); }
            // fill used grid spaces
            floatingRoom = Room.CreateRoom(S02, end, rand(4));
            usedRooms.Add(floatingRoom);
            foreach (Door door in floatingRoom.doors) { openDoors.Add(door); }
            // fill used grid spaces
            Vector3 nRoomLocation;
            Door workingDoor;
            String workingRoom;
            // pick a random door on the openDoors list
            workingDoor = /* randomDoor */;
            // get a random rule
            nRoomLocation = workingDoor.position;
            // then I'm lost
        }
    }

    I know that I have to ensure convergence (namely, that the end is reachable), and to keep going until there are no more doors on the openDoors list. Right now I am simply trying to get this to work in 2D (there are rules that introduce 3D), but I am working on the presumption that a rigorous algorithm can be trivially extended to 3D.

    EDIT: My thought pattern so far is to take an existing open door and then pick a random room (restrictions can be put in later), place that room's center at the door's location, move the room in the direction of the door's orientation by half the room's dimension with respect to that axis, then test against the 3D array to see if all the grid points are open, or have been used, or if there is even space to put the room (caseEdge). If caseEdge (which can also occur in between rooms), then put the door on a toBeClosed list and remove it from the open list (placing a wall or something there). Then do some kind of test that both the start and the goal are connected and reachable from each other (each room has nodes for AI, but I don't want to "have" to pull those out to accomplish this). But this logic has a problem for, say, the U- or L-shaped rooms in my example, and then I also have a problem conceptually if the room needs to be rotated.
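    As an illustration of the occupancy test described in the edit above, here is a minimal sketch in Python. It assumes a solid rectangular footprint and a grid indexed [y][x][z] as in the pseudocode; all names are illustrative, not from the original code. For L- or U-shaped rooms, the same test could iterate over a per-room cell mask instead of a solid box.

        # Minimal sketch: can a room's rectangular footprint be placed in the grid?
        # grid[y][x][z] is True when that cell is already occupied by a placed room.
        def can_place(grid, corner, dimensions):
            cy, cx, cz = corner          # proposed minimum corner of the room
            dy, dx, dz = dimensions      # room size in grid cells
            # Reject placements that run off the edge of the grid (the caseEdge case).
            if (cy < 0 or cx < 0 or cz < 0 or
                    cy + dy > len(grid) or
                    cx + dx > len(grid[0]) or
                    cz + dz > len(grid[0][0])):
                return False
            # Reject placements overlapping an already-used cell.
            return not any(grid[y][x][z]
                           for y in range(cy, cy + dy)
                           for x in range(cx, cx + dx)
                           for z in range(cz, cz + dz))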

    Read the article

  • How to make a stack stable? Need help with an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver, as described by Erin Catto (mentioned in the references of the paper as [Catt05]). So here's how it currently is: The simulation handles 2-dimensional rotating convex polygons. Detection uses the separating-axis test, with a SKIN, meaning the closest points between two polygons are detected and determined if their distance is less than SKIN. To resolve collisions, the simultaneous impulse-based method is used, solved with the iterative PGS solver as in Erin Catto's paper. Error correction is implemented using Baumgarte's stabilization (you can refer to either paper for this) using J V = beta/dt*overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time-step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored). However, it is still less stable than I expected :s I tried to stack hexagons (or squares, doesn't really matter), and even with only 4 to 5 of them, they would swing! Also note that I am not looking for a sleeping scheme. But I would settle if you have any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state). Ideas I have tried: using simultaneous position-based error correction as described in section 5.3.2 of the paper, which turned out to be worse than the current scheme. If you want to know the parameters I used:

    Hexagons, side 50 (pixels)
    gravity 2400 (pixels/sec^2)
    time-step 1/60 (sec)
    beta 0.1
    restitution 0 to 0.2
    coeff. of friction 0.2
    PGS iterations 10
    initial separation 10 (pixels)
    mass 1 (unit is irrelevant for now; I modified velocity directly <- impulse method)
    inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys!! :)

    EDIT In response to Cholesky's comment about warm starting the solver and Baumgarte: Oh right, I forgot to mention! I do save the contact history and the impulse determined in this time step to be used as the initial guess in the next time step. As for the Baumgarte, here's what actually happens in the code. Collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If at this moment I used the PGS solver without Baumgarte, restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! So this isn't right; I want to have the bodies touching each other. So I turn on the Baumgarte, where its role is actually to pull the bodies together! Weird, I know: a scheme intended to push the bodies apart becomes useful for the reverse. Also, I found that if I increase the number of iterations to 100, stacks become much more stable, though the program becomes very slow.

    UPDATE Since the stack swings left and right, could it be that something is wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0
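    For comparison, here is a minimal sketch of how the Baumgarte bias term typically enters the normal-impulse update of a single contact in one PGS iteration, in the style of Catto's formulation. This is an illustration under assumed conventions (scalar form, one contact point); the variable names are not from the questioner's code, and in the SKIN setup described above the "overlap" would instead be the residual gap to close, flipping the bias direction.

        def normal_impulse_step(vn, overlap, normal_mass, acc, beta, dt, slop=0.01):
            """One PGS update of a contact's accumulated normal impulse (scalar form).

            vn          relative normal velocity at the contact (negative = approaching)
            overlap     penetration depth of the bodies
            normal_mass effective mass along the contact normal
            acc         accumulated normal impulse from previous iterations
            """
            # Baumgarte bias: remove a fraction beta of the overlap per time-step.
            # The small slop keeps the bias from fighting the solver at rest.
            bias = (beta / dt) * max(overlap - slop, 0.0)
            # Impulse that drives the post-solve normal velocity toward the bias target.
            lam = -(vn - bias) * normal_mass
            # Clamp the accumulated impulse, not the per-iteration increment
            # (the standard accumulated-impulse trick in PGS solvers).
            new_acc = max(acc + lam, 0.0)
            return new_acc, new_acc - acc   # (new accumulator, impulse to apply now)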

    Read the article

  • Is hidden content (display: none;) indexed by search engines? [closed]

    - by user568458
    Possible Duplicate: How bad is it to use display: none in CSS?

    We've established on this site before (in this question) that, since there are so many legitimate uses for hiding content with display: none; when creating interactive features, sites aren't automatically penalised for content that is hidden this way (so long as it doesn't look algorithmically spammy). Google's Webmaster guidelines also make clear that a good practice when using content that is initially legitimately hidden for interactivity purposes is to also include the same content in a <noscript> tag, and Google recommend that if you design and code for users, including users with screen readers or JavaScript disabled, then 9 times out of 10 good, relevant search rankings will follow (though their specific advice seems more written for cases where JavaScript writes new content to the page): "JavaScript: Place the same content from the JavaScript in a <noscript> tag. If you use this method, ensure the contents are exactly the same as what's contained in the JavaScript, and that this content is shown to visitors who do not have JavaScript enabled in their browser." So, best practice seems pretty clear. What I can't find out is the simple factual matter of whether hidden content is indexed by search engines (but with potential penalties if it looks 'spammy'), or whether it is ignored, or whether it is indexed but with a lower weighting (like <noscript> content is, apparently). (For bonus points it would be great to know if this varies or is consistent between display: none;, visibility: hidden;, etc., but that isn't crucial.) This is different to the other questions on display: none; and SEO - those are about good and bad practice, and the answers are discussions of good and bad practice; I'm interested simply in the factual yes-or-no question of whether search engines index, or ignore, content that is in display: none; - something those other questions' answers aren't totally clear on. One other question has an answer, "Yes", supported by a link to an article that doesn't really clear things up: it establishes that search engines can spot that text is hidden, it discusses (again) whether hidden text causes sites to be marked as spam, and it ultimately concludes that, in mid 2011, Google's policy on hidden text was evolving and that they hadn't at that time started automatically penalising display: none; or marking it as spam. It's clear that display: none; isn't always spam and isn't always treated as spam (many Google sites use it...), but this doesn't clear up how, or if, it is indexed. What I will do is follow the guidelines and make sure that all the content that is initially hidden, which regular users can explore using javascript-driven interactivity, is also structured in a way that noscript/screenreader users can use. So I'm not interested in best practice, opinions etc., because best practice seems really clear: accessibility best practice boosts SEO. I'd just like to know what exactly will happen: whether any display: none; content I have alongside <noscript> or otherwise accessibility-optimised content will be ignored, or indexed, or picked up to compare against the <noscript> content but not indexed... etc.

    Read the article

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
    Hello. I'm the webmaster of an online shop based in Austria (Europe), so we registered "example.at". We also own various other domain names like "example-shop.com" and "example.info". Currently all of those domains are redirected (301) to the .at one. Still available: "example.net" and "example.org" (and .ws/.cc); unfortunately not available: .de/.eu. The .com is currently owned by one of our partners; the contract ends in 2012, but until then we have no chance of getting it. Recently I read more about geo-targeting and noticed ONE big deal: the TLD ".at" is hardly recognised in Germany (google.de), whereas it is excellently listed in Austria (google.at). And because of the .at, I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en This is a big problem. I looked at Google Analytics and - although Germany is 10x as big as Austria - there are more visits from Austria. So, how should I configure the domains in order to get the best results in both Germany and Austria? I thought of some solutions:
    1. Stop redirecting the .info. It would then be a duplicate of the .at one, but in Webmaster Tools I could set the target location of the .info to Germany. As the .at still targets Austria, both would be targeted - however, I don't know whether Google penalises one of them because of the duplicate content.
    2. Same as 1., but with .net or .org (I think .info is not a "nice" domain, and moreover I think search engines prefer .com, .net or .org to .info).
    3. Same as 1. (or 2.), but with a rel="canonical" on the new one, pointing to the .at. Con: I don't think this will improve the situation, because it still tells Google that the .at one is more important, as in: "if .info points to .at, the target may still be Austria".
    4. rel="canonical" on the .at, pointing to the new one (.info, .net or .org). However, I fear this will have a negative impact on the listing on google.at, because: "Hey, the well-known .at is not important anymore, so let's focus on the .info, which is not well-known." - and therefore a bad position in the search results.
    5. Redirect the .at to the new one (.info, .net or .org) with a 301 redirect. Con: this might be worse than 4.; we might lose PageRank (or "the value of the page", since Google says PageRank is not important anymore). Moreover, this might be even more confusing for the customers. In 3. and 4. customers don't get redirected; they do not see the canonical meta tag.
    So, dear experts, please tell me what the best option would be! Thank you very much in advance for your advice, and please excuse the long question. I really appreciate this network! Please note: it's exactly the same content AND language. In Austria we speak German.

    Read the article

  • Is there any kind of established architecture for browser based games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here. The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source or a game master starting a new round. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot - again: I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and can thus be trusted, to a certain degree, not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide. I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but I am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototyping different ideas, but I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that? EDIT: After reading the first answers, I am under the impression that I made a mistake by including the "MMO" part in the title. The game will not be at all fancy (i.e. 3D or such) on the client side, and the logic will exist completely on the server, apart from basic form validation for the user, which will also be mirrored on the server side. So the target toolset will be HTML5, JavaScript, probably jQuery(UI). My question is more related to the software architecture/design of a system that enforces certain rules.
    Separation of ruleset and presentation: One problem I am having is that I want to separate the game rules from the presentation. The first step would be to make the game "engine" a module of its own that only exposes an interface allowing all actions to be taken in a clean way. If an action fails with regard to some pre/post condition, the engine throws an exception, which is then presented to the user, like "you cannot sell something you do not own" or "after that you would end up in a situation which is not a valid game state." The problem here is that I would like to be able to not even present invalid actions in the first place, or to grey out the corresponding UI elements.
    Changing and tweaking the ruleset: Another big thing is the ruleset. It will probably evolve over time and most definitely must be tweaked. What's more, it should be possible (to a certain extent) to build a ruleset that fits a specific game round, i.e. choosing different kinds of behaviours in different aspects of the game. This would mean something like "we play with extension A today, but we throw out extension B." For me, this screams "architectural/design pattern", but I have no idea who might have published something like this, or even what to google for (see the sketch below for roughly what I mean).
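    One way to picture the pluggable-ruleset idea above is a sketch like the following (C# is used purely for illustration; the server language is an open choice, and every name here is invented). The engine is configured per round with the active rules, so "extension A in, extension B out" is just a different rule list, and the UI can query the same engine to grey out invalid actions instead of duplicating the rules client-side:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // All names below are invented for the example.
        record PlayerAction(string PlayerId, string Kind, string ItemId);

        class GameState
        {
            // itemId -> owning playerId
            public Dictionary<string, string> ItemOwner { get; } = new Dictionary<string, string>();
        }

        // A rule inspects an action against the current state; null means "allowed".
        interface IRule
        {
            string Check(GameState state, PlayerAction action);
        }

        class MustOwnWhatYouSell : IRule
        {
            public string Check(GameState s, PlayerAction a) =>
                a.Kind == "sell" && s.ItemOwner.GetValueOrDefault(a.ItemId) != a.PlayerId
                    ? "you cannot sell something you do not own"
                    : null;
        }

        // The engine is configured per round with the active rules, so swapping
        // rule "extensions" in and out only changes this list.
        class RulesEngine
        {
            readonly List<IRule> rules;
            public RulesEngine(IEnumerable<IRule> activeRules) { rules = activeRules.ToList(); }

            public List<string> Violations(GameState s, PlayerAction a) =>
                rules.Select(r => r.Check(s, a)).Where(m => m != null).ToList();

            // The UI can call this to grey out invalid actions up front.
            public bool CanDo(GameState s, PlayerAction a) => Violations(s, a).Count == 0;
        }

        class Demo
        {
            static void Main()
            {
                var engine = new RulesEngine(new IRule[] { new MustOwnWhatYouSell() });
                var state = new GameState();
                var sale = new PlayerAction("alice", "sell", "team-member-7");
                Console.WriteLine(engine.CanDo(state, sale));         // False
                Console.WriteLine(engine.Violations(state, sale)[0]); // the error message
            }
        }

    Validating on the server and using CanDo only as a UI hint keeps the authoritative ruleset in one place, which matches the "engine exposes a clean interface" goal above.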

    Read the article

  • I need help with 2D collision response (of stacking rotating polygons, with friction and gravity, for a game)

    - by Register Sole
    Hi, I am looking for suggestions on how to write a collision response for game-programming purposes (so not a scientific simulation). I am dealing with 2D polygons that are rotating, and I want them to be able to stack. I also want friction and gravity. I have a detection mechanism that returns the separating axis, how far the polygons overlap, and up to 2 points of contact. For the response, I am currently using an impulse-based response, whose main idea is:
    1. Find the separating axis, the length of the overlap, and the point of contact (if there are two, pick a random point between them to simulate an averaged force; I believe there are better ways than this).
    2. Separate the objects, modifying their positions and taking their masses into account. I do not separate them completely, though, so as to keep track of the fact that they are colliding, which reduces jitter.
    3. Calculate the normal force based on the coefficient of restitution, as if there were no friction (a rough sketch of this step appears at the end of this post).
    4. Calculate friction, as if there were no normal force. I also assume that the direction of the friction is the same throughout the collision.
    5. Apply the two forces (which gives a rather inaccurate result, since each force is calculated as if the other were not present; for non-rotating bodies, though, this method is exact).
    I am aware that this method requires the coefficient of friction to be sufficiently small, due to the assumption that the direction of friction stays the same during a collision. Also, the result is visually satisfying if gravity is not present. However, when there is gravity, objects on the ground jitter and drift (even with a coefficient of restitution of zero)! The same happens with stacking objects. A larger coefficient of restitution and stronger gravity increase the jittering. I hope you can help me with this. Some things I would like to know more about: how to handle a collision with two points of contact (how do I end up with an object sitting still on the ground?); how to reduce, and if possible prevent, jitter and drift (do people use the most accurate method possible, or is there a trick to overcome this?); and how to handle collisions between multiple objects (for example, in the case of stacking objects, how do I check collisions between all of them and keep them all stable at every frame so they don't jitter?). A total reformulation of my algorithm is also welcome, as long as it works. Another thing to note is that I am not making a physics game, so I only need a visually satisfying response (though a realistic response is preferable, if it is not performance-heavy). But surely jittering and drifting objects on flat ground are not at all acceptable. In addition, I am a Physics student, so feel free to talk about impulse and whatever else is needed. Finally, I'm sorry for the long post. I tried to be as concise as I can. Thank you for reading it! EDIT It seems that what I never managed to come up with, all this time, is treating resting contact as a class of its own, and how to solve it. Currently reading the paper suggested by Jedediah. More suggestions on the topic are welcome :) CASE CLOSED After reading various papers referenced in that paper, I successfully implemented the simultaneous impulse method (referring to the original paper by Erin Catto, [Catt05]). Thanks maaaan!! The paper is wonderful. The current system is visibly much better than the previous one. I still haven't separated resting contact as a class of its own, though, which brings me to my next question. Love you all! Haha (sorry, I'm just so happy, thanks to you).
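    For reference, step 3 above is the textbook restitution impulse. A minimal sketch (C#, linear terms only, rotation omitted, names invented for the example; a sketch of the standard formula, not the poster's implementation):

        using System;
        using System.Numerics;

        class ImpulseSketch
        {
            // Normal impulse for one contact with restitution e and no friction,
            // linear terms only (rotational terms omitted for brevity).
            static float NormalImpulse(Vector2 n, Vector2 vA, Vector2 vB,
                                       float invMassA, float invMassB, float e)
            {
                float closing = Vector2.Dot(vB - vA, n); // negative when approaching
                if (closing >= 0) return 0f;             // already separating: no impulse
                return -(1 + e) * closing / (invMassA + invMassB);
            }

            static void Main()
            {
                Vector2 n = new Vector2(0, -1);          // normal from A to the ground B (y-up)
                Vector2 vA = new Vector2(0, -40);        // A falling onto static ground
                float j = NormalImpulse(n, vA, Vector2.Zero, 1f, 0f, 0.2f);
                vA -= 1f * j * n;                        // apply the impulse to A
                Console.WriteLine($"j = {j}, vA after: {vA}"); // bounces up at 0.2 * 40 = 8
            }
        }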

    Read the article

  • primefaces p:commandButton action not redirecting when in a p:dialog

    - by John
    I have a command button with an action that returns a new URL to redirect to. When this command button is not inside the p:dialog, everything works as I would expect, but when I put the command button in the dialog, the dialog just closes and leaves me on the same page.

        <h:form id="reviewForm">
            <h:messages id="saveMsg" showDetail="true" globalOnly="true"/>
            <p:panel style="text-align: center">
                <p:commandButton type="button" value="Submit"
                                 onclick="showOptions(); areYouSureVar.show();" />
            </p:panel>
            <p:dialog id="areYouSure" widgetVar="areYouSureVar" resizable="false"
                      width="400" modal="true"
                      header="Are you sure you want to continue?">
                <p:panel style="text-align: center">
                    <p:commandButton id="submit" ajax="false" value="Submit"
                                     action="#{mybean.myaction}"
                                     oncomplete="areYouSureVar.hide();"/>
                    <p:commandButton type="button" value="Cancel"
                                     onclick="areYouSureVar.hide();"/>
                </p:panel>
            </p:dialog>
        </h:form>

    mybean.myaction returns a string that points to another page, but that page never loads; the dialog goes away and that is all.

    Read the article

  • How to improve WinForms MSChart performance?

    - by Marcel
    Hi all, I have created some simple charts (of type FastLine) with MSChart and update them with live data. To do so, I bind an observable collection of a custom type to the chart like so:

        // set chart data source
        this._Chart.DataSource = value; // value is of type ObservableCollection<SpectrumLevels>

        // define x and y value members for each series
        this._Chart.Series[0].XValueMember = "Index";
        this._Chart.Series[1].XValueMember = "Index";
        this._Chart.Series[0].YValueMembers = "Channel0Level";
        this._Chart.Series[1].YValueMembers = "Channel1Level";

        // bind data to chart
        this._Chart.DataBind(); // takes 1.5 seconds for 8000 points per series

    At each refresh the dataset changes completely; it is not a scrolling update! With a profiler I have found that the DataBind() call takes about 1.5 seconds. The other calls are negligible. How can I make this faster? Should I use another type than ObservableCollection? An array, probably? Should I use another form of data binding? Is there some tweak for the MSChart that I may have missed? Should I use a sparse set of data, with only one value per pixel? Have I simply reached the performance limit of MSChart? Given the type of application, to keep it "fluent" we should have multiple refreshes per second. Thanks for any hints!
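    One commonly suggested alternative is to skip the DataSource/DataBind() path entirely and fill the series points directly, batching the redraw. A sketch, assuming the 8000 values per channel are available as plain double arrays (names are illustrative, and the System.Windows.Forms.DataVisualization assembly is referenced):

        using System.Linq;
        using System.Windows.Forms.DataVisualization.Charting;

        static class ChartRefresh
        {
            // Refresh two FastLine series from raw arrays, bypassing DataSource and
            // DataBind(). SuspendUpdates/ResumeUpdates batch everything into one repaint.
            public static void Refresh(Chart chart, double[] ch0, double[] ch1)
            {
                double[] xs = Enumerable.Range(0, ch0.Length)
                                        .Select(i => (double)i).ToArray();
                chart.Series.SuspendUpdates();
                chart.Series[0].Points.DataBindXY(xs, ch0);
                chart.Series[1].Points.DataBindXY(xs, ch1);
                chart.Series.ResumeUpdates();
            }
        }

    If the FastLine series still repaint too slowly after this, decimating to roughly one point per horizontal pixel before binding, as the question itself suggests, is the usual next step.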

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article

  • Xcode: Unable to open project... cannot be opened because the project file cannot be parsed...

    - by Chris Butler
    Hi everyone. I have been working for a while to create an iPhone app. Today, when my battery was low, I was working and constantly saving my source files; then the power went out... Now that I have plugged my computer back in and it is getting good power, I try to open my project file and I get an error: "Unable to Open Project Project ... cannot be opened because the project file cannot be parsed." Does anyone know a way I can recover from this? I tried using an older project file, re-inserting it, and then compiling. It gives me a funky error, which is probably because it isn't finding all the files it wants... I really don't want to rebuild my project from scratch if possible. Thanks in advance. EDIT Ok, I did a diff between this and a slightly older project file that worked and saw that there was some corruption in the file. After merging them (keeping the good and newest parts) it is now working. Great points about the SVN. I have one, but there has been some funkiness trying to sync Xcode with it. I'll definitely spend more time with it now... ;-) Thanks for everyone's comments and suggestions.

    Read the article

  • xcode linker error on iPhone app (Only on simulator)

    - by RexOnRoids
    I'm getting this linker error that won't let me compile. It only happens on the simulator. KEY POINTS:
    - Happens only in the simulator
    - Similar to THIS question, but I found no FRAMEWORK_SEARCH_PATHS in my .pbxproj file
    - Though my OS is 10.6.2, I had to build target 1.5 to avoid other linker errors
    - libxml2.dylib IS required and is in my Frameworks group
    - The other cited libraries I have never heard of
    - Tried bringing those other libs in under Frameworks; it didn't solve it

        Build SpaceTweet of project SpaceTweet with configuration Debug
        Ld build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet normal i386
        cd "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)"
        setenv MACOSX_DEPLOYMENT_TARGET 10.5
        setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 -arch i386 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -L/Users/Scott/Desktop "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/../../libYAJLIPhone-0" -L/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.0.sdk/usr/lib "-F/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -filelist "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/SpaceTweet.build/Debug-iphonesimulator/SpaceTweet.build/Objects-normal/i386/SpaceTweet.LinkFileList" -mmacosx-version-min=10.5 -framework Foundation -framework UIKit -framework CoreGraphics -framework AVFoundation -framework MessageUI -lYAJLIPhone -lxml2 -o "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet"
        ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libxml2.dylib, missing required architecture i386 in file
        ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libSystem.dylib, missing required architecture i386 in file
        ld: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libobjc.A.dylib, missing required architecture i386 in file
        collect2: ld returned 1 exit status
        Command /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1

    CLUE: Again, MY question is very similar to THIS SOLVED QUESTION, except that in my case I did NOT find a FRAMEWORK_SEARCH_PATHS entry in the .pbxproj file in my project bundle, and thus could not solve it in the manner in which that question was solved.

    Read the article

  • TeamCity stopped working once I added NUnit to the mix

    - by Dave
    I'm struggling a lot trying to get our build server going. I am currently running tests in a Windows XP virtual machine and have installed TeamCity v5.0.3, build 10821. I am using NUnit v2.5.3. I finished the initial setup of TeamCity without any issues at all, provided that I use the sln2008 build runner, which makes the entire process almost brainless. It's really quite nice that way, and very satisfying to see your first successful automated build. Now it's time to kick it up a notch, and I wanted to get NUnit working. I keep the NUnit 2.5.3 assemblies in an external libs folder in SVN, so I checked that out onto the test system. I selected NUnit 2.5.3 from the build runner options, as the online instructions recommend. But when I build, I get the following errors:

        Window1.xaml.cs(14,7): error CS0246: The type or namespace name 'NUnit' could not be found (are you missing a using directive or an assembly reference?)
        Window1.xaml.cs(28,10): error CS0246: The type or namespace name 'Test' could not be found (are you missing a using directive or an assembly reference?)
        Window1.xaml.cs(28,10): error CS0246: The type or namespace name 'TestAttribute' could not be found (are you missing a using directive or an assembly reference?)

    Everything compiles great in the IDE. From finding blog posts and submitting comments, I got some advice and confirmed the following:
    - I have the HintPath value set properly in my project file (it points to the external lib)
    - I can also do a full Release and Debug build from the command line using msbuild
    - I have tried using the NUnit installer so that nunit.framework.dll gets registered in the GAC
    - I have changed the build agent's logon account to be a user on the test system, rather than LOCAL SYSTEM
    Nothing seems to help... can anyone else here offer me some advice on what to try next?
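    For reference, the three CS0246 errors above are what a minimal test file like the following produces when nunit.framework.dll cannot be resolved at compile time on the build agent; this is an illustrative reconstruction, not the actual Window1.xaml.cs:

        using NUnit.Framework;            // if unresolved: "'NUnit' could not be found"

        namespace BuildServerDemo
        {
            [TestFixture]
            public class Window1Tests
            {
                [Test]                    // if unresolved: "'Test' / 'TestAttribute' not found"
                public void Sanity_Check()
                {
                    Assert.IsTrue(true);  // placeholder body for the sketch
                }
            }
        }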

    Read the article

< Previous Page | 187 188 189 190 191 192 193 194 195 196 197 198  | Next Page >