Search Results

Search found 28928 results on 1158 pages for 'line of sight'.

Page 701 of 1158

  • Firefox and Chrome Display "top: -5px" Differently

    - by Kevin
    Using Google Web Toolkit, I have a parent DIV with a DIV and an anchor as children:

        <div class="unseen activity">
          <div class = "unseen-label"/>
          <a href .../>
        </div>

    With the following CSS, Chrome shows the "unseen-label" DIV slightly below the anchor, while Firefox shows the label in line with the anchor. The anchor itself is positioned correctly in both Chrome and Firefox.

        .unseen-activity div.unseen-label {
          display: inline-block;
          position: relative;
          top: -5px;
        }

    and

        .unseen-activity a {
          background: url('style/images/refreshActivity.png') no-repeat;
          background-position: 0 2px;
          height: 20px;
          overflow: hidden;
          margin-left: 10px;
          display: inline-block;
          margin-top: 2px;
          padding-left: 20px;
          padding-right: 10px;
          position: relative;
          top: 2px;
        }

    How can I change my CSS so that Chrome renders the label centered on the anchor, while keeping Firefox happy and rendering correctly?
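
    A minimal sketch of one possible approach, not from the original thread: since the two browsers disagree about the relative top offsets, aligning both children to a common midline with vertical-align and dropping the offsets may render consistently. The selectors are taken from the question; everything else is an assumption.

        /* Hedged sketch: center both children on one midline instead of
           nudging them with browser-sensitive top offsets. */
        .unseen-activity div.unseen-label,
        .unseen-activity a {
          display: inline-block;
          vertical-align: middle;  /* align label and anchor to the same midline */
          position: static;        /* drop the top: -5px / top: 2px nudges */
        }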

    Read the article

  • Entity framework separating entities for product and customer specific implementation

    - by Codecat
    I am designing an application with the intention of turning it into a product line. I would like to extend functionality across all layers, and the first struggle is with the domain models. For example, the core functionality would have an entity named Invoice with a few standard fields; customer requirements will then add some new fields to it, but I don't want to add them to the core Invoice class. For every customer I could use a customer-specific DbContext and inject the correct context with dependency injection. Every customer will also get their own deployment.

        public class Product.Domain.Invoice
        {
            public int InvoiceId { get; set; }
            // Other fields
        }

    How should I approach this problem?

    Solution 1 does not work, since Entity Framework does not allow two classes with the same simple name:

        public class CustomerA.Domain.Invoice : Product.Domain.Invoice
        {
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }

    Solution 2: create a separate table and link it to the core domain table. Reusing services and controllers could be harder:

        public class CustomerA.Domain.CustomerAInvoice
        {
            public Product.Domain.Invoice Invoice { get; set; }
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }
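
    A hedged sketch of how Solution 2 might be wired up with EF's fluent API, assuming an EF6-style one-to-one relationship; the context class and configuration beyond the question's own property names are hypothetical:

        // Hypothetical sketch: a customer-specific context mapping the
        // extension table one-to-one against the core Invoice table.
        public class CustomerAContext : DbContext
        {
            public DbSet<Invoice> Invoices { get; set; }
            public DbSet<CustomerAInvoice> CustomerAInvoices { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Each CustomerAInvoice extends exactly one core Invoice row.
                modelBuilder.Entity<CustomerAInvoice>()
                    .HasRequired(c => c.Invoice)
                    .WithOptional(); // core Invoice keeps no navigation back
            }
        }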

    Read the article

  • Functions that only call other functions. Is this a good practice?

    - by Eric C.
    I'm currently working on a set of reports that have many different sections (all requiring different formatting), and I'm trying to figure out the best way to structure my code. Similar reports we've done in the past end up having very large (200+ line) functions that do all of the data manipulation and formatting for the report, such that the workflow looks something like this:

        DataTable reportTable = new DataTable();

        void RunReport()
        {
            reportTable = DataClass.getReportData();
            largeReportProcessingFunction();
            outputReportToUser();
        }

    I would like to break these large functions up into smaller chunks, but I'm afraid that I'll just end up having dozens of non-reusable functions, and a similar "do everything here" function whose only job is to call all these smaller functions, like so:

        void largeReportProcessingFunction()
        {
            processSection1HeaderData();
            calculateSection1HeaderAverages();
            formatSection1HeaderDisplay();
            processSection1SummaryTableData();
            calculateSection1SummaryTableTotalRow();
            formatSection1SummaryTableDisplay();
            processSection1FooterData();
            getSection1FooterSummaryTotals();
            formatSection1FooterDisplay();
            processSection2HeaderData();
            calculateSection1HeaderAverages();
            formatSection1HeaderDisplay();
            calculateSection1HeaderAverages();
            ...
        }

    Or, if we go one step further:

        void largeReportProcessingFunction()
        {
            callAllSection1Functions();
            callAllSection2Functions();
            callAllSection3Functions();
            ...
        }

    Is this really a better solution? From an organizational point of view I suppose it is (i.e. everything is much more organized than it might otherwise be), but as far as code readability goes I'm not sure (potentially large chains of functions that only call other functions). Thoughts?
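
    One alternative sometimes suggested for this shape of problem, sketched here as an assumption rather than anything from the thread: give each section its own small class behind a uniform interface, so the top-level function iterates instead of enumerating every call. All names below are hypothetical.

        // Hypothetical sketch: each section is a class with one entry point,
        // so the orchestrator loops rather than listing dozens of calls.
        interface IReportSection
        {
            void Process(DataTable reportTable);
        }

        class Section1Report : IReportSection
        {
            public void Process(DataTable reportTable)
            {
                ProcessHeaderData(reportTable);
                CalculateHeaderAverages(reportTable);
                FormatHeaderDisplay(reportTable);
            }

            void ProcessHeaderData(DataTable t) { /* ... */ }
            void CalculateHeaderAverages(DataTable t) { /* ... */ }
            void FormatHeaderDisplay(DataTable t) { /* ... */ }
        }

        void LargeReportProcessingFunction(IEnumerable<IReportSection> sections,
                                           DataTable reportTable)
        {
            foreach (var section in sections)
                section.Process(reportTable);
        }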

    Read the article

  • Attributes of an Ethical Programmer?

    - by ahmed
    Software that we write has ramifications in the real world. If not, it wouldn't be very useful. Thus, it has the potential to sweep across the world faster than a deadly manmade virus or to affect society every bit as much as genetic manipulation. Maybe we can't see how right now, but in the future our code will have ever-greater potential for harm or good. Of course, there's the issue of hacking. That's clearly a crime. Or is it that clear? Isn't hacking acceptable for our government in the event of national security? What about for other governments? Cases of life-and-death emergency? Tracking down deadbeat parents? Screening the genetic profile of job candidates? Where is the line drawn? Who decides? Do programmers have responsibility for how their code is used? What if a programmer writes code to pry into confidential information or copy-protected material? Does he bear responsibility along with the person who used the program? What about a programmer who knowingly or unknowingly writes code to "fix the books?" Should he be liable?

    Read the article

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    This question already has an answer here: What is enterprise software, exactly? (8 answers)

    I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple:

    - MVC
    - Service Layer
    - EF
    - DB

    to really complex:

    - MVC
    - UoW
    - DI / IoC
    - Repository
    - Service
    - UI Tests
    - Unit Tests
    - Integration Tests

    But on both ends of the spectrum, the quality requirements are about the same. In simple projects, new devs/consultants can hop on, make changes, and contribute immediately, without having to wade through six layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction and incurring costs down the line. In all cases, there was never a need to actually make code swappable or reusable, and the tests were never actually maintained past the first iteration because requirements changed, it was too time-consuming, deadlines, business pressure, etc.

    So if, in the end:

    - testing and interfaces aren't used
    - rapid development (read: cost-savings) is a priority
    - the project's requirements will be changing a lot while in development

    ...would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it reliability, number of concurrent users, ease of maintenance, or all of the above?

    I know this is a very vague question, and any answer wouldn't apply to all cases, but I'm interested in hearing from devs/consultants that have been in the business for a while and have worked with these varying degrees of complexity, to hear whether the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

    Read the article

  • How to Manage and Use LVM (Logical Volume Management) in Ubuntu

    - by Justin Garrison
    In our previous article we told you what LVM is and what you may want to use it for, and today we are going to walk you through some of the key management tools of LVM so you will be confident when setting up or expanding your installation. As stated before, LVM is an abstraction layer between your operating system and physical hard drives. What that means is that the drives and partitions your operating system sees are no longer tied directly to the physical hard drives and partitions they reside on. Rather, the drives the operating system sees can be any number of separate hard drives pooled together or in a software RAID.
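
    The excerpt cuts off here, but the management tools it refers to are the standard LVM command-line utilities. As a hedged illustration of the typical workflow (device and volume names below are placeholders, not from the article):

        # Typical LVM workflow sketch -- /dev/sdb and /dev/sdc are placeholders.
        sudo pvcreate /dev/sdb /dev/sdc             # mark disks as physical volumes
        sudo vgcreate vg_data /dev/sdb /dev/sdc     # pool them into a volume group
        sudo lvcreate -n lv_home -L 100G vg_data    # carve out a logical volume
        sudo mkfs.ext4 /dev/vg_data/lv_home         # put a filesystem on it
        # Later, growing the volume is a two-step resize:
        sudo lvextend -L +50G /dev/vg_data/lv_home
        sudo resize2fs /dev/vg_data/lv_home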

    Read the article

  • Install tmux on Mac OS X

    - by unixben
    This is a short rundown on how to get tmux running on your Mac OS X system. The same methodology applies when compiling this on Solaris.

    What is tmux? According to the developer's page, "tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached".

    Why not just use screen? For me, the primary reason I switched to tmux from screen is the much easier configuration syntax that tmux offers. If you've ever struggled with formatting screen's caption or hardstatus line, then you will appreciate the ease with which you can achieve the same results in tmux.

    Preparing your environment: you will need a C compiler installed. I believe that OS X ships by default with GNU make, but if not, then you will need to obtain it or use Xcode.

    Download the sources. While I'm putting all this together, I like to keep everything neatly tucked away in a build directory:

        mkdir ~/build
        cd ~/build
        curl -OL http://downloads.sourceforge.net/tmux/tmux-1.5.tar.gz
        curl -OL http://downloads.sourceforge.net/project/levent/libevent/libevent-2.0/libevent-2.0.16-stable.tar.gz

    Unpack the sources:

        tar xzf tmux-1.5.tar.gz
        tar xzf libevent-2.0.16-stable.tar.gz

    Compiling libevent:

        cd libevent-2.0.16-stable
        ./configure --prefix=/opt
        make
        sudo make install

    Compiling tmux:

        cd ../tmux-1.5
        LDFLAGS="-L/opt/lib" CPPFLAGS="-I/opt/include" LIBS="-lresolv" ./configure --prefix=/opt
        make
        sudo make install

    That's all there is to it!
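
    Since the post's main selling point is tmux's simpler status-line configuration, here is a hedged example of what that looks like in practice. This snippet is not from the original post, just a minimal ~/.tmux.conf using option names valid for tmux of this vintage:

        # Minimal ~/.tmux.conf sketch: a colored status line with host and time.
        set -g status-bg blue
        set -g status-fg white
        set -g status-left "#H"                 # hostname on the left
        set -g status-right "%Y-%m-%d %H:%M"    # date/time on the right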

    Read the article

  • questions on a particular algorithm

    - by paul smith
    Upon searching for a fast prime-testing algorithm, I stumbled upon this:

        public static boolean isP(long n) {
            if (n == 2 || n == 3) return true;
            if ((n & 0x1) == 0 || n % 3 == 0 || n < 2) return false;
            long root = (long) Math.sqrt(n) + 1L;
            // we check just numbers of the form 6*k+1 and 6*k-1
            for (long k = 6; k <= root; k += 6) {
                if (n % (k - 1) == 0) return false;
                if (n % (k + 1) == 0) return false;
            }
            return true;
        }

    My questions are:

    - Why is long being used everywhere instead of int? Because with a long type the argument could be much larger than Integer.MAX_VALUE, thus making the method more flexible?
    - In the second 'if', is n & 0x1 the same as n % 2? If so, why didn't the author just use n % 2? To me it's more readable.
    - On the line that sets the 'root' variable, why add the 1L?
    - What is the run-time complexity? Is it O(sqrt(n/6)) or O(sqrt(n)/6)? Or would we just say O(n)?

    Read the article

  • Can org.freedesktop.Notifications.CloseNotification(uint id) be triggered and invoked via DBus?

    - by george rowell
    ref: Close button on notify-osd?
    Bookmark: Can org.freedesktop.Notifications.CloseNotification(uint id) be triggered and invoked via DBus?

    Currently, this script:

        dbus-monitor "interface='org.freedesktop.Notifications'" | \
          grep --line-buffered "member=Notify" | \
          sed -u -e 's/.*/killall notify-osd/g' | \
          bash

    will kill all pending notifications. It would be better to finesse the specific target OSD notification to cancel, by using org.freedesktop.Notifications.CloseNotification(uint id). Is there an interface method that can put this on (in?) the DBus to fire when a particular notify event occurs? The method will need to get the notify PID to use as the argument for CloseNotification(uint id). Alternatively,

        qdbus org.freedesktop.Notifications \
          /org/freedesktop/Notifications \
          org.freedesktop.Notifications.CloseNotification(uint id)

    could be used from the shell, if the (uint id) argument could be determined. The actual command syntax would use an integer in place of (uint id). Perhaps a better question to ask first might be "How is the DBus address for a notification found?". In hindsight, the previous question "How is the (uint id) for a notification found?" is rhetorical!

    This previous answer: http://askubuntu.com/a/186311/89468 provided details, so either method below can be used:

        gdbus call --session --dest org.freedesktop.DBus \
          --object-path / \
          --method org.freedesktop.DBus.GetConnectionUnixProcessID :1.16

    returning: (uint32 8957,)

    or

        qdbus --literal --session org.freedesktop.DBus / \
          org.freedesktop.DBus.GetConnectionUnixProcessID :1.16

    returning: 8957
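
    As a hedged illustration, not from the question: once an id is known, CloseNotification can be invoked directly. The id 42 below is a placeholder; in practice it is the uint32 value returned by the Notify call that created the notification.

        # Sketch: close the notification with id 42 (placeholder id).
        gdbus call --session \
          --dest org.freedesktop.Notifications \
          --object-path /org/freedesktop/Notifications \
          --method org.freedesktop.Notifications.CloseNotification 42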

    Read the article

  • Dependency Injection and method signatures

    - by sunwukung
    I've been using YADIF (yet another dependency injection framework) in a PHP/Zend app I'm working on to handle dependencies. This has achieved some notable benefits in terms of testing and decoupling classes. However, one thing that strikes me is that despite the sleight of hand performed when using this technique, the method names impart a degree of coupling. This is probably not the best example, but these methods are distinct from, say, the PEAR Mailer: the method names themselves are a (subtle) form of coupling.

        // example
        public function __construct($dic) {
            $this->dic = $dic;
        }

        public function example() {
            // this line in itself indicates the YADIF origin of the DIC
            $Mail = $this->dic->getComponent('mail');
            $Mail->setBodyText($body);
            $Mail->setFrom($from);
            $Mail->setSubject($subject);
        }

    I could write a series of proxies/wrappers to hide these methods and thus promote decoupling from the container, but this seems a bit excessive. You have to balance purity with pragmatism... How far would you go to hide the dependencies in your classes?
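
    One hedged alternative, not from the post: inject the collaborator itself rather than the container, so the class no longer betrays which DI framework built it. The class below mirrors the question's Zend-style mailer but is otherwise hypothetical.

        <?php
        // Hypothetical sketch: constructor injection of the mailer itself,
        // so nothing in this class reveals the YADIF origin of its dependencies.
        class InvoiceNotifier
        {
            private $mail;

            public function __construct(Zend_Mail $mail)
            {
                $this->mail = $mail;
            }

            public function notify($body, $from, $subject)
            {
                $this->mail->setBodyText($body);
                $this->mail->setFrom($from);
                $this->mail->setSubject($subject);
                $this->mail->send();
            }
        }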

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how best to implement this in a component-based game architecture. Say I have a player character that can be stationary, running, and swinging a sword. A player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches:

    - Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components.

    - Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent, and SwingComponent. To enforce the transition rules, I have to couple each component: SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare: each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character.

    The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lob all AI logic into one component, or break up each logic state into separate components to create entity variants more easily?
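
    A hedged sketch of the second approach with the coupling pushed into a small shared gate instead of into each component, so SwingComponent never needs to know which siblings exist. Everything here (the names, the StateGate idea) is an assumption for illustration, not from the question.

        // Hypothetical sketch: components request state changes through a
        // shared gate; the gate, not the components, enforces transitions.
        enum ActorState { Standing, Walking, Swinging }

        class StateGate
        {
            public ActorState Current { get; private set; } = ActorState.Standing;

            // Refuse to leave Swinging until the swing reports completion.
            public bool TryEnter(ActorState next)
            {
                if (Current == ActorState.Swinging && next != ActorState.Swinging)
                    return false; // swing must finish first
                Current = next;
                return true;
            }

            public void FinishSwing() => Current = ActorState.Standing;
        }

        class SwingComponent
        {
            private readonly StateGate gate;
            public SwingComponent(StateGate gate) { this.gate = gate; }

            public void Update()
            {
                // Stand/Walk components make the same TryEnter call and are
                // simply refused mid-swing; no direct coupling between them.
                if (gate.TryEnter(ActorState.Swinging))
                {
                    /* play swing animation; on completion: */
                    gate.FinishSwing();
                }
            }
        }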

    Read the article

  • Laptop Display Not Working

    - by etrask
    Hello, I just purchased this laptop. It is working fine, but I want to use Xubuntu on it. I managed to get 10.04 installed and running, but it did not recognize/use the wireless card. So I want to try 10.10. However, every time I try, the screen goes black and never comes back on. I have tried the Live CD and Alternate install CDs for both Ubuntu and Xubuntu 10.10. After I select "Install Ubuntu" or "Try Ubuntu", the screen blacks out and never displays anything. So I deleted "quiet" and "splash" off the command line to see what messages would come up. A bunch of text flies by, but it ends at:

        [time] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
        [time] TCP bind hash table entries 65536 (order: 8, 1048576 bytes)
        [time] TCP: Hash tables configured (established 524288 bind 65536)
        [time] TCP reno registered
        [time] UDP hash table entries: 2048 (order: 4, 65536 bytes)
        [time] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
        [time] NET: Registered protocol family 1

    and freezes here forever. I have tried nomodeset, vga=771, and xforcevesa with no progress. Is my video card simply not going to work with this distro? It seems strange that 10.04 would work fine but not 10.10. Any help is appreciated, thank you.

    Read the article

  • Cube chunk via list ToArray()

    - by Christian Frantz
    I've created a list of vertices that is filled for each cube made in my array "cubes". When each cube is created, SetUpVertices is called, a method that stores the 8 vertices of my cube. At the end of my list creation, I create a vertex buffer and set the data of the list that contains the vertices of all 25 cubes to that vertex buffer, effectively creating a "chunk" of cubes. The problem is that an InvalidOperationException ("The array is not the correct size for the amount of data requested.") is thrown at the line containing vertices.ToArray(). I don't have an array for this, as the number of cubes will be changing and arrays aren't dynamic. What could be the cause of this?

        for (int x = 0; x < 5; x++)
        {
            for (int z = 0; z < 5; z++)
            {
                SetUpVertices();
                cubes.Add(new Cube(device, new Vector3(x, map[x, z], z), color));
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor), 8, BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionColor>(vertices.ToArray());
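
    A hedged observation, not from the post: the buffer above is declared for 8 vertices while the list holds 25 cubes' worth, so sizing the buffer from the list itself would be the natural fix — assuming SetUpVertices appends to vertices on each call.

        // Sketch: size the vertex buffer from the list rather than hard-coding 8.
        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor),
                                        vertices.Count, BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionColor>(vertices.ToArray());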

    Read the article

  • How do functional languages handle a mocking situation when using Interface based design?

    - by Programmin Tool
    Typically in C# I use dependency injection to help with mocking:

        public class UserService
        {
            public UserService(IUserQuery userQuery, IUserCommunicator userCommunicator, IUserValidator userValidator)
            {
                UserQuery = userQuery;
                UserValidator = userValidator;
                UserCommunicator = userCommunicator;
            }
            ...
            public UserResponseModel UpdateAUserName(int userId, string userName)
            {
                var result = UserValidator.ValidateUserName(userName);
                if (result.Success)
                {
                    var user = UserQuery.GetUserById(userId);
                    if (user == null)
                    {
                        throw new ArgumentException();
                    }
                    user.UserName = userName;
                    UserCommunicator.UpdateUser(user);
                }
                ...
            }
            ...
        }

        public class WhenGettingAUser
        {
            public void AndTheUserDoesNotExistThrowAnException()
            {
                var userQuery = Substitute.For<IUserQuery>();
                userQuery.GetUserById(Arg.Any<int>()).Returns(null);
                var userService = new UserService(userQuery);
                AssertionExtensions.ShouldThrow<ArgumentException>(() => userService.GetUserById(-121));
            }
        }

    Now in something like F#: if I don't go down the hybrid path, how would I test workflow situations like the above, which would normally touch the persistence layer, without using interfaces/mocks? I realize that every step above would be tested on its own and would be kept as atomic as possible. The problem is that at some point they all have to be called in line, and I'll want to make sure everything is called correctly.
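
    A hedged sketch of the usual functional answer, not from the question: pass the dependencies as functions, then substitute plain lambdas in tests — no interface or mocking framework needed. All names below are hypothetical.

        // Hypothetical F# sketch: dependencies are just function parameters.
        type User = { Id: int; UserName: string }

        let updateUserName
                (validate: string -> bool)
                (getUserById: int -> User option)
                (updateUser: User -> unit)
                (userId: int)
                (userName: string) =
            if validate userName then
                match getUserById userId with
                | None -> invalidArg "userId" "no such user"
                | Some user -> updateUser { user with UserName = userName }

        // In a test, the "mocks" are ordinary lambdas:
        let wasCalled = ref false
        updateUserName (fun _ -> true)
                       (fun _ -> Some { Id = 1; UserName = "old" })
                       (fun _ -> wasCalled := true)
                       1 "new"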

    Read the article

  • Ubuntu 13.10 Unity doesn't load after upgrade

    - by William
    Just upgraded to Ubuntu 13.10, only to find that Unity won't load: login freezes, and after doing Ctrl+Alt+F1, logging in, and then doing startx, I get a blank desktop, the mouse pointer, and nothing else. I can right-click, but the only operations that work are "create new file" and "create new folder". For example, "change desktop background" doesn't work. Also, after doing a few right-clicks and choosing "change desktop background", I get a warning message box: "compiz closed unexpectedly." Guest login works fine. Tried creating a new user, but I experience the same thing with the new user. Tried removing all configuration files from my home directory... same thing. Doing dconf reset -f /org/compiz/ gives an error "error spawning command line...". Doing unity --reset also gives errors. Tried uninstalling Unity (and Compiz) and reinstalling, but that doesn't help. Tried reconfiguring lightdm; didn't help. I don't have any proprietary drivers installed. Once again, the funny thing is that the guest session works fine.

    Read the article

  • How do I fix a terrible system error on Ubuntu 12.04?

    - by Anonymous
    I don't know what happened, but one day my computer had some sort of system error and could no longer update itself. The Software Center will not open: it begins to initialize, and then a message pops up saying there's a system error and it needs to shut down the Software Center. Then another box pops up after I go to report it, saying it was unable to identify the source or package name. I also can't extract a zipped folder to anything, or reinstall Ubuntu from a USB boot drive anymore; it keeps telling me my computer isn't compatible when I know for a fact it is, because that's how I got Ubuntu on here in the first place. The only thing I know about this error is that a message popped up after I went to check for updates saying to report the problem and include this message in the report:

        E:malformed line 56 in source list /etc/apt/sources.list (dist parse)

    It also called it a bug. I just want to know how to either get rid of the bug completely or find some way to be able to reinstall Ubuntu again. I know it's not a lot of information, but it's all I can give. Sorry.
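
    A hedged repair sketch, not part of the original report: apt is pointing at line 56 of /etc/apt/sources.list, so inspecting and correcting (or commenting out) that line is the usual first step.

        # Show the line apt is complaining about.
        sed -n '56p' /etc/apt/sources.list
        # Comment it out in place (backup kept as sources.list.bak),
        # then re-check that apt parses the file cleanly.
        sudo sed -i.bak '56s/^/# /' /etc/apt/sources.list
        sudo apt-get update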

    Read the article

  • Testing a codebase with sequential cohesion

    - by iveqy
    I have this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to implement TDD to continue the development, and have found a nice C unit-testing framework for this. However, I'm totally stuck on how to apply it. Take this case, for example: a user types the letter 'l', which is captured by ncurses' getch(), and then an sqlite3 query is run that calls a callback function for every row. This callback function prints stuff to the screen via ncurses.

    So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is as expected. However, this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI, so that the callback function would populate a list of entries that would later be printed. In that case I would be able to check whether that list contains the expected values. However, why would I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted in some other way, it would be expensive to throw the list away and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has that? Using my original design I could just run another query, sorted differently.

    Previously I've only done TDD with command-line applications, where it's really easy to just compare the output with what I expect. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything (the way git.git does with its test framework). So the question is: how do you add testing to a tightly integrated database/UI?
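
    One hedged middle ground, not from the post: instead of buffering rows in a list, give the sqlite3 callback an output function pointer. Production code passes an ncurses-backed printer; tests pass a capture function, and the query/sort logic stays in SQL. The names here are invented for illustration.

        /* Hypothetical seam: the row callback writes through a function pointer,
         * so tests can capture rows without a terminal. */
        #include <stdio.h>
        #include <string.h>
        #include <sqlite3.h>

        typedef void (*row_sink)(const char *line, void *ctx);

        struct sink_binding { row_sink sink; void *ctx; };

        /* sqlite3_exec-style callback: format the row, hand it to the sink. */
        static int on_row(void *arg, int ncols, char **vals, char **names) {
            struct sink_binding *b = arg;
            b->sink(vals[0] ? vals[0] : "(null)", b->ctx);
            return 0;
        }

        /* Test sink: append into a buffer instead of calling printw(). */
        static void capture_sink(const char *line, void *ctx) {
            strcat((char *)ctx, line);
            strcat((char *)ctx, "\n");
        }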

    Read the article

  • Mapping Your Customer Experience Journey

    - by Michael Hylton
    For those who attended today's Oracle Customer Experience Summit keynote, you heard Brian Curran talk about the strategies and best practices for implementing customer experience (CX) in your organization. He spoke about how this evolving journey begins by understanding six steps to transform your business and put your customers front and center. Here are those key six steps:

    1. What are the strategic business objectives in your company?
    2. What are your operational objectives and the KPIs necessary to measure a CX project? Build an income statement, create "what if" scenarios, and see how changes impact your business's bottom line. Explore what keeps you from getting to your own goals for your business.
    3. Define the business objectives and opportunities you want to meet.
    4. Understand the trends and accelerators in the market. What factors in the market impact your business? Social? Mobile? Cloud? Just to name a few. Many of these trends may signal a change in the way people think about your business.
    5. What approach will you take to solve these issues? Understand who your customer is and how you need to adapt your business to build relevant, personalized customer experiences.
    6. What technologies can you implement to address CX? Does technology help you solve your problem?

    A great way to begin your customer experience journey is a concept called journey mapping, one of the most powerful and deceptively simple tools for unlocking CX innovation at your organization. Here is where you can learn more about how you can bring this concept into your business to drive great customer experiences.

    Read the article

  • ifconfig can't see USB wireless

    - by Alex
    I have a wifi USB dongle which I have previously used on a Raspberry Pi (this is what it is targeted at). I am trying to get it working on an Nvidia Jetson TK1, however I am having some problems. When I run ifconfig I can't see the wifi interface, only Ethernet and local loopback. iwconfig reports "no wireless extensions" on all devices. lsusb does find the device:

        Bus 002 Device 008: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter

    So I am not sure why the network tools can't see it. I have tried logging in with a GUI and opening up the network settings through Unity, but cannot see any wireless devices either. Not sure if this is useful, but here is the output of lsmod:

        Module                  Size  Used by
        nvhost_vi               2940  0

    How can I enable wireless networking on this computer? A command-line approach is preferred, but either is fine.

    UPDATE: I don't have the kernel module rt2800usb anywhere on my system. If I do an apt-file search for rt2800usb, it lists a number of packages matching the pattern linux-image-3.13.0-*. Perhaps installing one of these will do the trick, but can anyone tell me if it's safe to do so?
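
    A hedged diagnostic sketch, not from the question: the RT5370 is normally driven by the rt2800usb module, so the first checks are whether that module exists for the running kernel and whether it loads.

        # Is the driver available for the running kernel?
        modinfo rt2800usb || find /lib/modules/$(uname -r) -name 'rt2800usb*'
        # If present, load it and watch the kernel log for the adapter binding.
        sudo modprobe rt2800usb
        dmesg | tail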

    Read the article

  • Cron prepending filename to script output

    - by Caitifty
    I'm having an issue with unwanted lines being added to files output by a cron job. I have a script in /etc/cron.hourly which selects some data from a MySQL database and saves it in a text file in /var/www. When I run the script as root, it does exactly what I expect it to do. When the script is executed by cron, it creates the same file, but prepends the following three lines at the top of the output file:

        ::::::::::::::
        /var/www/outputfilename
        ::::::::::::::

    I can't for the life of me work out how to stop this unwanted behavior. The line in /etc/crontab for cron.hourly is the default "44 * * * * root cd / && run-parts --report /etc/cron.hourly". If I use su to become root and do "cd / && run-parts --report /etc/cron.hourly", the script runs as expected and the output doesn't have the mysterious additional 3 lines. I've also tried removing the --report flag from the run-parts command in case that was somehow connected, but no joy. Finally, perusing the cron log output in /var/log/syslog just says cron.hourly ran, without giving any additional information. Any suggestions on solving this weird problem are most welcome.

    Read the article

  • Error: kernel headers not found. (But they are in place)

    - by Guandalino
    I'm trying to install the Guest Additions in VirtualBox 4.04. The host OS is Ubuntu Desktop 11.04 64-bit; the guest OS is Ubuntu Server 11.10 64-bit.

        $ sudo ./VBoxLinuxAdditions.run

    After some output, this line is printed:

        The headers for the current running kernel were not found.

    But the headers are installed, at least according to dpkg:

        $ dpkg --get-selections | grep linux-headers
        linux-headers-3.0.0-12          install
        linux-headers-3.0.0-12-server   install
        linux-headers-server            install

    The running kernel is:

        $ uname -a
        Linux foobar 3.0.0-12-server #20-Ubuntu SMP Fri Oct 7 16:36:30 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

    How do I fix things so that the Guest Additions installer is able to find the kernel headers?

    Update: added full output.

        The headers for the current running kernel were not found.
        If the module compilation fails then this could be the reason.
        Building the main Guest Additions module ...done.
        Building the shared folder support module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)
        Installing the Window System drivers ...fail!
        (Could not find the X.Org or XFree86 Window System).

    I don't care about failure #2, because this is a server and I don't need an X server. But I need shared folder support. Some further detail:

        $ tail /var/log/vboxadd-install.log
        ..........
        cc1: some warnings being treated as errors
        make[2]: *** [/tmp/vbox.0/vfsmod.o] Error 1
        make[1]: *** [_module_/tmp/vbox.0] Error 2
        make: *** [vboxsf] Error 2
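
    A hedged first step, not from the question: the installer usually locates headers via the build tree for the running kernel, so making sure the exact matching headers and toolchain are present, and pointing KERN_DIR at them explicitly, is worth trying.

        # Install headers matching the running kernel plus the build toolchain.
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        # Point the Guest Additions installer at the header tree explicitly.
        sudo KERN_DIR=/usr/src/linux-headers-$(uname -r) ./VBoxLinuxAdditions.run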

    Read the article

  • What is the appropriate way to manage a MySQL connection through C#

    - by Sylca
    My question, at the bottom line, is: what is the appropriate (best) way to manage our connection to a MySQL db with C#? Currently I'm working on a C# (WinForms) <-> MySQL application, and I've been looking at Server Connections in MySQL Administrator, watching the execution of my MySQL queries, connections opening and closing, and so on. In my C# code I'm working like this (an example):

        public void InsertInto(string qs_insert)
        {
            try
            {
                conn = new MySqlConnection(cs);
                conn.Open();
                cmd = new MySqlCommand();
                cmd.Connection = conn;
                cmd.CommandText = qs_insert;
                cmd.ExecuteNonQuery();
            }
            catch (MySqlException ex)
            {
                MessageBox.Show(ex.ToString());
            }
            finally
            {
                if (conn != null)
                {
                    conn.Close();
                }
            }
        }

    Meaning, every time I want to insert something into a db table I call this method and pass the insert query string to it. The connection is established, opened, the query executed, and the connection closed. So we could conclude that this is the way I manage MySQL connections. For me, currently, this works and it's enough for my requirements. Well, you have Java & Hibernate, C# & Entity Framework, and I'm doing this :-/ and it's confusing me. Should I use MySQL with Entity Framework? What is the best way for collaboration between C# and MySQL? I don't want to worry about whether the connection I've opened is closed, whether that same connection could be faster, ...
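
    A hedged variant of the same method, not from the post: ADO.NET connection pooling makes open-per-operation cheap, and using-blocks tidy up the disposal. The connection string cs is assumed to be the same field as in the question.

        // Sketch: same pattern with using-blocks; pooling makes Open() cheap.
        public void InsertInto(string qsInsert)
        {
            using (var conn = new MySqlConnection(cs))
            using (var cmd = new MySqlCommand(qsInsert, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            } // both disposed here; the connection returns to the pool
        }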

    Read the article

  • Implementation Specialist OPN Exam for OBI Suite 11g is Now LIVE

    - by Mike.Hallett(at)Oracle-BI&EPM
    The OPN specialisation exam for implementation consultants of OBI Suite 11g is now live and ready for all partners. You can now update your specialisation certification to the latest product version, 11g, for OBI: until recently, the accreditation had examined skills for OBI 10g.

    For more details, see Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591), where you can apply for this Oracle Implementation Specialist credential. This exam is primarily intended for consultants skilled in implementing solutions based on Oracle Business Intelligence Foundation Suite 11g. The certification covers skills such as: installing OBIEE, building the BI Server metadata repository, building BI dashboards, constructing ad hoc queries, defining security settings, and configuring and managing cache files. The exam targets the intermediate-level implementation team member. Up-to-date training and field experience are recommended.

    Also note the new OPN on-line sales & pre-sales assessment tests now available:

    - Oracle Business Intelligence Foundation Suite 11g Sales Specialist
    - Oracle Business Intelligence Foundation Suite 11g PreSales Specialist
    - Oracle Business Intelligence Foundation Suite 11g Support Specialist

    FREE certification testing at OpenWorld: are you attending OPN Exchange @ OpenWorld? Then join us at the OPN Specialist Test Fest, October 1st - 4th 2012, Marriott Marquis Hotel. Pre-register here now!

    For more information:

    - OPN Certified Specialist exams
    - OPN Certified Specialist FAQ
    - Enablement 2.0 Get Specialized!

    Read the article

  • Grub2 won't detect Ubuntu 11.10 after reinstalling the Windows XP hal.dll

    - by yoopian
    Hi, I'm an Ubuntu newbie here. I've installed Ubuntu 11.10 to dual boot on a single HDD. I did a manual partition and basically forgot on which sda my /boot partition is. My installation worked out just fine and I installed updates with it. After a while, when I wanted to boot to Windows, it showed that I was missing the "hal.dll" file. I fixed this problem using the Windows resource CD, but then after booting up my PC it went straight to Windows XP. I've tried to manually reinstall Grub2 using a Live CD/USB and it worked, but I think I have installed it on a different "sda#" (sda5 to be exact), because even though Grub2 loads when I boot my PC, only Windows XP shows up as an OS and Ubuntu 11.10 is missing. Now I've tried installing Boot-Repair to solve my problems using a Live CD/USB. Boot-Repair tells me that the boot configuration was successful, but then a basic GRUB interface shows up (the black one with a command-line grub prompt). Now I can't even boot to Windows XP. Any help would be really appreciated. BTW, here are the notes from Boot-Repair that I was asked to save: http://paste.ubuntu.com/890228/ As you can see, there are boot files on sda5 and sda7. I think that's the core problem that I have right now. Thanks in advance!
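
    A hedged recovery sketch, not from the question: from a live session, mounting the Ubuntu root partition and reinstalling GRUB to the disk's MBR (not to a partition like sda5) is the usual repair. That the root filesystem is on sda7 is an assumption based on the Boot-Repair notes.

        # From the live CD/USB; assumes the Ubuntu root filesystem is on sda7.
        sudo mount /dev/sda7 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda   # MBR, not a partition
        sudo reboot
        # After booting into Ubuntu, regenerate the menu so Windows is detected:
        sudo update-grub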

    Read the article

  • SEO and external sites that serve responsive images (like Re-SRC)

    - by Baumr
    Re-SRC is a tool that allows you to automatically serve responsive images for your website from their cloud servers. It delivers a new image file each time the browser window (viewport) is resized. To use it in your HTML when linking to an image, you would do the following:

        <img src="http://app.resrc.it//www.your-domain.com/img/img001.jpg"/>

    Some more background for the SEO considerations: as an example, looking at their demo page's code, the src of the Arc de Triomphe photo, when the browser window is resized to tablet width, shows this particular file at its widest. It is found under the following URL:

        http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg

    If the viewport is increased to desktop width, then a smaller image is served in line with the design; see this URL:

        http://app4-uk.resrc.it/s=w320,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg

    If I change the viewport to be about half-way between those two, then the image's URL is:

        http://app4-uk.resrc.it/s=w240,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg

    In other words, I found that there is a separate file for every 10-pixel increment of the image width. Very cool for saving bandwidth on mobile devices and serving responsive/retina images on others, but...

    Here are two problems I see for SEO:

    - The img on your site, part of your semantic markup, will not be hosted on your site at all, or even on a server you control. Any links to these images will pass on "link juice" to Re-SRC's site instead.
    - You are serving a vast array of different image files to different people; some may link to one, others to another size. Then there's the question of what different search engine crawlers will see.

    Also: there seems to be no fallback option if their servers are down.

    Do you see any other concerns? Or, perhaps, do you not see those as concerns?

    Read the article
