Search Results

Search found 21777 results on 872 pages for 'howard may'.


  • Xamarin Wins Funding, Microsoft Builds Repair Tool

    Let's focus on the Xamarin news first. Xamarin is a young company with a phoenix-like history. Founded in May of 2011 by Miguel de Icaza and the rest of the team that created Mono, Xamarin got its start, effectively, as Ximian (de Icaza's previous company). Ximian was founded way back in 1999, and created Mono, which TechCrunch describes as an open source project that brings Microsoft's .NET development framework to non-Microsoft operating systems like Android, iOS and Linux. Novell acquired Ximian in 2003, and continued to fund Mono's development. But apparently, when Attachmate bought No...

    Read the article

  • Other people's files showing up in rhythmbox

    - by Avery Boyer
    I have my computer connected to a college network, and right now files that belong to other individuals on campus are showing up under Shared in Rhythmbox. This is driving me up the wall; I absolutely despise the idea that files are being thrown around on the network, that other people's s*** is showing up on my computer, and that they may be able to see my files as well. This is a very, very serious problem as far as I am concerned. I want to know how I can ensure that I am sharing no files with the network and that no one else's files show up on my computer.
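
    For what it's worth, what shows up under Shared is other machines advertising DAAP music shares; seeing them does not by itself mean your files are shared back. To be sure nothing is shared out, the DAAP plugin can be disabled (Edit > Plugins > "DAAP Music Sharing") or, on GSettings-based versions of Rhythmbox, switched off from a terminal. A minimal sketch, assuming a recent Rhythmbox; the schema and key names below vary by version, so list them first:

        # List the sharing-related keys this Rhythmbox actually exposes
        gsettings list-recursively | grep -i 'rhythmbox.*shar'

        # Assumed key names -- verify against the output above before running
        gsettings set org.gnome.rhythmbox.sharing enable-sharing false   # stop sharing your library
        gsettings set org.gnome.rhythmbox.sharing enable-browsing false  # stop listing others' shares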

    Read the article

  • unzip error "End-of-central-directory signature not found"

    - by Tim
    I tried to unzip a zip file, but got an error:

        $ unzip COCR2_100.zip
        Archive:  COCR2_100.zip
          End-of-central-directory signature not found.  Either this file is not
          a zipfile, or it constitutes one disk of a multi-part archive.  In the
          latter case the central directory and zipfile comment will be found on
          the last disk(s) of this archive.
        note:  COCR2_100.zip may be a plain executable, not an archive
        unzip:  cannot find zipfile directory in one of COCR2_100.zip or
                COCR2_100.zip.zip, and cannot find COCR2_100.zip.ZIP, period.

    I googled but didn't find a solution. Why does this happen, and how should I fix it? Thanks! The zip file can be downloaded from COCR2_100. It is an application, and here is its website: http://users.belgacom.net/chardic/cocr2.html. My OS is Ubuntu 10.10.
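
    This error usually means the download is truncated, or is actually an HTML page saved with a .zip name, rather than a problem with unzip itself. A hedged diagnostic sketch using standard tools (the repair step assumes Info-ZIP's zip is installed; the real .zip link is on the page above, so take it from there):

        file COCR2_100.zip       # a healthy archive reports "Zip archive data"
        ls -l COCR2_100.zip      # compare the size against what the site reports
        # If the file was truncated, re-download it from the link on
        # http://users.belgacom.net/chardic/cocr2.html
        # If it is genuinely damaged, let Info-ZIP try to rebuild the
        # central directory, then extract the repaired copy:
        zip -FF COCR2_100.zip --out COCR2_fixed.zip
        unzip COCR2_fixed.zip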

    Read the article

  • Should a primary key be immutable?

    - by Vincent Malgrat
    A recent question on Stack Overflow provoked a discussion about the immutability of primary keys. I had thought that it was a kind of rule that primary keys should be immutable, and that if there is a chance that some day a primary key would be updated, you should use a surrogate key. However, it is not in the SQL standard, and some RDBMS' "cascade update" feature allows a primary key to change. So my question is: is it still a bad practice to have a primary key that may change? What are the cons, if any, of having a mutable primary key?
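
    By way of illustration, here is a minimal SQL sketch (table and column names are invented) of the usual compromise: an immutable surrogate key for foreign keys to reference, with a UNIQUE constraint on the natural key that might legitimately change:

        CREATE TABLE customer (
            customer_id  INTEGER PRIMARY KEY,          -- surrogate key: never updated
            tax_number   VARCHAR(20) NOT NULL UNIQUE   -- natural key: may change some day
        );

        CREATE TABLE orders (
            order_id     INTEGER PRIMARY KEY,
            customer_id  INTEGER NOT NULL REFERENCES customer (customer_id)
        );

        -- The natural key can change without touching any referencing rows:
        UPDATE customer SET tax_number = 'DE-999' WHERE customer_id = 42;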

    Read the article

  • What web hosts support multi-domain SSL?

    - by Bryan Hadaway
    For Consideration - Please do not close or refer this question to: How to find web hosting that meets my requirements? The above link does not refer to SSL certificates in any manner. This question has a very specific objective of listing known web hosts that support this new SSL technology. If I'm not mistaken, multi-domain (not wildcard) SSL is a relatively new technology that is not yet widely supported or well known/advertised. I'm having a difficult time discovering which web hosts support the technology (again, because it's not popular enough yet to advertise on feature lists). Here is what I've discovered so far:

    Web hosts that DO NOT support multi-domain SSL:
    - BlueHost/HostMonster
    - DreamHost

    Web hosts that DO support multi-domain SSL:
    - FireHost
    - HostGator

    Please note that SUPPORT doesn't necessarily mean they offer the SSL certs themselves; you may need to purchase them separately.

    Read the article

  • How can I repair a corrupt kernel if no others are installed?

    - by Willi Ballenthin
    I've been running Ubuntu 10.04 on my laptop for quite some time. When I went to boot it up this morning, BAM! Kernel panic (which immediately led to human panic) when loading the kernel. So I've spent much of the day troubleshooting, and my current theory is that the FS is fine but the kernel image may be corrupt. Let's go with this theory for the sake of this question, as I am interested in how it is done. How can I replace the kernel image if I have no bootable kernels? Can I boot to a 10.04 live CD, copy the vmlinuz-2.6.3x... to the HD and go from there? Wouldn't I want to copy the initramfs as well, but configured for the desktop system? Can I generate this from the live CD?
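
    Copying vmlinuz by hand can work, but it is easier to chroot from the live CD and let the package manager reinstall the kernel, which regenerates a matching initramfs for the installed system rather than the live one. A hedged sketch; /dev/sda1 and the kernel version are placeholders for your actual root partition and installed kernel:

        # From the 10.04 live CD:
        sudo mount /dev/sda1 /mnt            # your root partition (placeholder)
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        # Inside the chroot:
        dpkg -l 'linux-image-2.6*'                                 # find the installed kernel version
        apt-get install --reinstall linux-image-2.6.32-38-generic  # placeholder version from the list above
        update-initramfs -u -k 2.6.32-38-generic                   # rebuild its initramfs
        update-grub
        exit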

    Read the article

  • 12.10 lost window maximize and minimize options, utility bar and launcher lost too

    - by user12448
    I recently installed Ubuntu 12.10 on my Dell Inspiron 7520. Everything is good except the hyperactive fans. The graphics cards are Intel and AMD Radeon 7730. I am trying to work around the fan problem, but I don't really know the terminal, so I have just been trying the commands available on the forums. But this time I don't know what I did: now I don't have the utility bar or the launcher, and basic features like window minimize and maximize don't work. I have the impression I may have uninstalled some drivers, but I'm still not sure what to do. I feel in dire need of help!
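
    If packages were removed along with the drivers, reinstalling the desktop metapackage and resetting Unity's settings often brings the launcher and window decorations back. A hedged sketch for the 12.10 era, run from a text console (Ctrl+Alt+F1); the dconf path is the usual Compiz one but still an assumption:

        sudo apt-get install --reinstall ubuntu-desktop   # restore the desktop metapackage
        sudo apt-get -f install                           # pull back anything left half-removed
        dconf reset -f /org/compiz/                       # reset Unity/Compiz settings to defaults
        setsid unity                                      # restart the shell, detached from this console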

    Read the article

  • Did Microsoft Add Wiretapping Capability to Skype?

    Ryan Gallagher, writing for Slate, put two and two together from a lot of no comments. He noted that back in 2007, German police forces said that they couldn't tap into Skype calls because of its strong encryption and complicated peer-to-peer network connections; in fact, Skype bluntly stated at the time that, due to its encryption and architecture techniques, it couldn't conduct wiretaps. But that may have changed. Gallagher cited a Forbes article that claims the hacker community is talking about recent changes to Skype's architecture and whether they will allow users to be wiretapped. ...

    Read the article

  • 3D architecture app for Android or iPhone

    - by Manixate
    I want to make an app for 3D modeling on iPhone/Android, but I can't get a basic idea of how to get started. I have various options, such as learning OpenGL ES, UDK or Unity3d, but I want to create models (e.g. architecture) in my app and then render them when the user is finished modeling. I do not know whether I would be able to design models and then render them, with various effects, in the same app on the iPhone/Android using UDK or Unity3d. (Note: if you find this question unclear please ask; I may have skipped some vital information.)

    Read the article

  • What is use of universal character names in identifiers in C++11

    - by Jan Hudec
    The new C++ standard specifies universal character names, written as \uNNNN and \UNNNNNNNN and representing the characters with Unicode code points NNNN/NNNNNNNN. This is useful in string literals, especially since explicit UTF-8, UTF-16 and UCS-4 string literals are also defined. However, universal character names are also allowed in identifiers. What is the motivation behind that? The syntax is obviously totally unreadable, the identifiers may be mangled for the linker, and it's not as if there were any standard function to retrieve symbols by name anyway. So why would anybody actually use an identifier with universal character names in it?
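
    For concreteness, a minimal sketch of what the rule permits (not a recommendation). \u00E9 is 'é'; whether the directly-typed UTF-8 spelling is also accepted was implementation-defined in C++11, so that case is hedged into a comment:

        #include <iostream>

        int main() {
            int caf\u00E9 = 42;             // identifier "café" spelled with a UCN
            caf\u00E9 += 1;                 // still the same identifier
            std::cout << caf\u00E9 << '\n'; // prints 43
            // On a compiler that accepts UTF-8 source directly (e.g. recent
            // Clang), writing `café` here would name the very same variable.
            return 0;
        }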

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications.

    So the first thing to define, of course, is the word "better" in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion, then, I stipulate that cloud computing can yield higher-quality applications in terms of scalability, everything else being equal.

    Before going further I also need to outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices.

    For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close it (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables.

    Now, back to cloud computing... Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However, it's not automatic. You can design a tightly coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud, because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale for the purpose of this discussion]. The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough).

    The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality.

    Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better... the cloud forces better design practices. My 2 cents.

    Read the article

  • Sales tracker that allows complex queries?

    - by feklee
    On a site, every click on a product should be registered by a sales tracker: price, type, etc. The sales tracker should provide an API so that complex queries can be performed, such as: which products of type "teapot" had a price below 20 EUR? Requirements:

    - Recorded data should be available for querying no later than two hours after it has been recorded. (For example, there are reports that Google Analytics may take up to 24 h to update data. That is not acceptable.)
    - Querying doesn't need to be fast, but recording does (of course).

    Which sales tracker allows complex queries against collected data?
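
    As a concrete acceptance test for any candidate tracker, here is the teapot question as a PostgreSQL-flavoured sketch. It assumes the tracker can expose or export click events as a queryable table; click_events and its columns are invented names:

        SELECT product_id, price_eur, clicked_at
        FROM   click_events
        WHERE  product_type = 'teapot'
          AND  price_eur < 20
          AND  clicked_at >= now() - interval '2 hours';  -- the freshness requirement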

    Read the article

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split, to allow for load splitting, resource sharing, etc. Everything is written to an interface, so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface; again this worked, and speed improved, but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem, but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is:

        interface IMyCode {
            List<IThing> getLots(List<String> ids);
            IThing getOne(String id);
        }

    The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client, which is proxied to a remoting client, which then calls the remoting server, which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have:

        Client interface -> Client cache -> Remote client -> Remote server -> Server cache -> Server interface

    If we call getOne("A") at the client interface, the call is passed to the client cache, which faults. This then calls the remote client, which passes the call to the remote server. This then calls the server cache, which also faults, and so the call is eventually passed to the server interface, which actually gets the IThing. In turn the server cache is filled, and finally the client cache also. If getOne("A") is again called at the client interface, the client cache has the data and it gets returned immediately. If a second client called getOne("B"), it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B"), the client cache faults but the server cache has the data. This is all as one would expect, and it works well.

    Now let's call getLots(["C", "D"]). This works as you would expect, by calling getOne() twice, but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface, which in turn calls the remote client, then the remote server, and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server, when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow, it becomes clear why the client call is more efficient than the server call once the client cache has the data.

    This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy, any subsequent calls are made on the proxied side rather than the 'self' side. Have I failed to learn something, or learned something incorrectly? All this is implemented in Java and I haven't used EJBs.

    It seems that the example may be confusing. The problem is nothing to do with cache efficiencies. It is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface, there is an assumption that a call on that object might make further calls on that same object. For example:

        public String getInternalString() throws java.net.UnknownHostException {
            return InetAddress.getLocalHost().toString();
        }

        public String getString() throws java.net.UnknownHostException {
            return getInternalString();
        }

    If you get an object and call getString(), the result depends on where the code is running. If you add a remoting proxy to the class, then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I wonder how I can control this behavior, especially as the proxy may be applied by a third party. The concept is fine but the practice is certainly not what I expected. Have I missed the point somewhere?

    Read the article

  • Speaking again after a long hiatus

    Gosh, it has really been long since this blog saw some action, and it probably needs some attention (weeding, trimming, re-planting, etc.). After a long hiatus, I will be speaking again at a big event at Marina Bay Sands Singapore for our Future Of Productivity launch on the 26th May 2010, keynoted by none other than Mr CEO - Steve Ballmer himself. Incidentally, this is also to celebrate Microsoft Singapore's 20th anniversary. The agenda is here. I will be touching on SQL Business Intelligence (BI),...

    Read the article

  • Oracle Solaris 11 Cheat Sheet

    - by Markus Weber
    Need to quickly know, or be reminded about, how to create network configuration profiles in Oracle Solaris 11? How to configure VLANs? How to manipulate Zones? How to use ZFS shadow migration? To have those answers, and many more, neatly in front of you, we created this cheat sheet (pdf). It was originally developed by Joerg Moellenkamp, the author of the very popular blog c0t0d0s0.org and of the "Less Known Solaris Features" series, and more people at Oracle then jumped in and added more and more very useful commands to it. And it may keep evolving, so keep checking! The link to it can also be found on our new Oracle Solaris Evaluation page.

    Read the article

  • How to install wvdial without getting the following error?

    - by chndn
    When I ran the command sudo apt-get install wvdial, I got this error message:

        p@p:~$ sudo apt-get install wvdial
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package wvdial is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is
        only available from another source
        E: Package wvdial has no installation candidate
        p@p:~$

    So how can I install wvdial? I don't have access to the internet from Ubuntu for direct installation. Please help.
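
    "No installation candidate" usually means the repository component that carries wvdial (universe) isn't enabled, and with no network from Ubuntu you would need the offline route anyway: fetch the .deb files on another machine from packages.ubuntu.com and install them locally. A hedged sketch; the dependency package names are typical for wvdial but should be checked against its packages.ubuntu.com page for your release:

        # On another machine: download wvdial plus its wvstreams/uniconf
        # dependency .debs for your release from packages.ubuntu.com, copy
        # them to the Ubuntu box, then install them together:
        sudo dpkg -i libwvstreams*.deb libuniconf*.deb wvdial*.deb

        # Later, once the machine does have a connection, enabling universe
        # is the usual fix (on older releases, enable it in
        # /etc/apt/sources.list instead):
        sudo add-apt-repository universe
        sudo apt-get update && sudo apt-get install wvdial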

    Read the article

  • Unable to install ia32-libs package

    - by Ron Thomas
    I need to manually install the drivers for my graphics card, which is an ATI 2600 XT, but I am using a 64-bit platform, and to do so I need the ia32-libs package. I have scoured the net for fixes, workarounds, etc., but have found none that work. The output I get after trying apt-get is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ia32-libs is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is
        only available from another source
        E: Package 'ia32-libs' has no installation candidate

    I am running Linux Mint 12.
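
    Two hedged things to try. First, make sure the repositories are actually reachable, since a stale or incomplete sources list produces exactly this message. Second, on releases where ia32-libs was dropped, multiarch is the replacement; note that Mint 12 tracks Ubuntu 11.10, where ia32-libs should still exist, so the multiarch step is an assumption that only applies if your dpkg is new enough (1.16.2+):

        sudo apt-get update            # refresh the package lists first
        apt-cache policy ia32-libs     # see whether any enabled repo offers it

        # Multiarch route (dpkg >= 1.16.2), used on releases without ia32-libs:
        sudo dpkg --add-architecture i386
        sudo apt-get update
        sudo apt-get install libc6:i386 libgcc1:i386 libstdc++6:i386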

    Read the article

  • Why distance field text rendering have clear outline?

    - by jinhwan
    http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf All the steps for generating a distance field are clear, but 'how does it work' at render time is not clear to me. It looks like the distance-field pixels created around the original pixel may affect the 2D texture-sampling interpolation process, but I can't understand that interpolation process. I've read that distance-field rendering is processed under nearest-neighbour interpolation. If that is true, shouldn't distance-field rendering produce a non-interpolated result? In my thinking it should look like retro-style pixel art. Where do I misunderstand this process? So far it is no different from an alpha test to me: both of them throw away all pixels that are not inside. How do the extra distance-field pixels affect rendering under nearest-neighbour interpolation?

    Read the article

  • How to overcome fear to draw web-site template? [closed]

    - by Ricky
    I have a problem with web-site design. I can understand CSS, I use CSS grid frameworks, etc., but I can't realize my own ideas. If I have a template, I can translate it into HTML and CSS, but I don't understand how to begin designing a web site from scratch. I think I am afraid to draw, or can't figure out the first steps. How do I overcome this fear? Maybe there are some specific techniques, like mind maps, to streamline the process? Or something else? Or you can share what you do when you begin, if anyone has some ideas. Thank you!

    Read the article

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
Not every time will you actually find the problem, and it is part of the process to occasionally admit that the problem is random, and everything works fine now.  Sometimes, it is necessary to realize that a problem is outside of your current knowledge, and admit temporary defeat: You can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that binary - 10 = decimal - 2, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke, and at that point you are that much closer to getting the joke and at least won't feel so ignorant.

    Read the article

  • Booting 11.10 from USB stick on MacBook Pro 5,1 fails

    - by Helge Stenström
    I've created a bootable memory stick on a Windows computer and tested it on an HP PC. It's made from a 64-bit image of Ubuntu 11.10, downloaded from http://www.ubuntu.com/download/ubuntu/download. When I boot from this memory stick, there is some kind of boot menu where I can choose to run Ubuntu from the memory stick or install it. I select "Run from memory stick" (the exact wording may be off; I'm quoting from memory). From this point, the screen is black (but backlit), and I can't do anything but turn off the computer. It gets hot, too. Has anyone been more successful than me? Are there known issues? The computer is a 15-inch MacBook Pro 5,1 (unibody, late 2008), with 4 GB memory.

    Read the article

  • Starting out with 2D cross-platform game development [closed]

    - by Aran
    I want to challenge myself to build a simple game that has a character and a randomly generated world. If I get anywhere with it, perhaps I'll develop it into something more, but the key challenge I want to tackle is cross-platform support. I'd also like to have a go at creating the engine myself, doing lighting and other bits. Is it worth using a system like Unity, or do I go down a more custom route? The game I would like to make is a 2D game, so if that changes the tools I should use, it would be great to know as well. Supporting mobiles isn't something I am worried about at the moment; I'm just looking at Mac and Windows for the time being. In future I'll consider other platforms if I get anywhere with the development. So if anyone has any recommendations for a language, engine or system to use, I would love to hear your thoughts. Including pros and cons would be helpful and appreciated, and if you can do comparisons, that would be awesome as well!

    Read the article

  • Looking for articles/books on: How do games make money? What models do they use?

    - by cable729
    I'm trying to research the ways in which games make money. I want to know more about the models they use (free/premium, trial/subscription, free-to-play with micro-transactions, etc.). In addition, I want information on which models work for which games, what models are best for which age groups, etc. I've tried my best to find information, and Google hasn't turned anything up at all. I think I'll stop by my University's library and see if there's anything there. This may seem like a broad question, but I'm looking for links and titles of books, not typed-out answers.

    Read the article

  • Windows 7 Display Color Calibration Wizard

    Windows 7 comes complete with plenty of new features for you to get the most out of your PC. While some of them may be obvious, the operating system also has tools that can help make your life easier. This series will discuss those tools and how to use them. To kick things off, let's take a look at the Display Color Calibration Wizard...

    Read the article

  • No sound on fresh install of 13.10

    - by Totalnon
    I installed a new copy of 13.10 today and cannot get my audio to work. The speakers test good on other devices, are enabled in the BIOS, and worked this morning in a Windows environment. I ran all of the available updates through apt-get. aplay -l lists my card (which tells me Ubuntu recognizes the hardware). I have been looking around the web and trying various things to no avail. The Pulse folder permissions seem fine, I've reinstalled ALSA, and the card shows both digital and analog in the sound section of System Settings. I have tried this guide: Sound Troubleshooting Guide. I have also looked through these forums under the tags sound and 13.10. Anyone have any ideas that may help me?
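
    A few more hedged things to try from a terminal. These are generic ALSA/PulseAudio resets rather than a guaranteed fix:

        alsamixer                   # check the right card is selected and nothing shows "MM" (muted)
        pulseaudio -k               # kill the per-user PulseAudio daemon; it respawns automatically
        sudo alsa force-reload      # reload the ALSA kernel modules
        pacmd list-sinks | grep -e 'name:' -e 'muted'   # check the default sink isn't muted or wrong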

    Read the article
