Search Results

Search found 18993 results on 760 pages for 'context info'.


  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example, since I can't seem to describe it better than the title.

    Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer. A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes, though, the print function could also use data stored in Schedule. I'm calling this data shared.

    So, the question is: what is the best way to access this shared data? I have a lot of options, since JS has closures, and I'm not familiar enough to know whether I should be using them or avoiding them in this case. Options:

    1. Create a local "reference" (term used lightly) to the shared data (the data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example:

            var schedule = function(){
                var sched = Schedule();
                var t1 = Term( function(x){ // Term.print()
                    return (x + sched.data).format();
                });
            };

    2. Bind it to Term explicitly (pass it in Term's constructor or something, or bind it in Schedule after Box passes it), and then access it as an attribute of Term.
    3. Pass it in at the same time x is passed to the print function (from Schedule). This is the most familiar way for me, but it doesn't feel right given JS's closure ability.
    4. Do something weird, like bind some context and arguments to print.

    I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.
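
    For comparison, here is a minimal C# analogue of options 1 and 3 (purely illustrative; the Schedule, Term, and Data names are hypothetical stand-ins, and ToString replaces the format call): capturing the shared data in a closure versus passing it explicitly at call time.

        using System;

        class Schedule { public int Data = 7; }             // owner of the shared data

        class Term
        {
            private readonly Func<int, string> print;       // print function injected by the parent
            public Term(Func<int, string> print) { this.print = print; }
            public string Print(int x) => print(x);
        }

        class Box
        {
            static void Main()
            {
                var sched = new Schedule();

                // Option 1: the print function closes over the shared Schedule data.
                var t1 = new Term(x => (x + sched.Data).ToString());

                // Option 3: the shared data is passed alongside x on every call.
                Func<int, int, string> print2 = (x, data) => (x + data).ToString();

                Console.WriteLine(t1.Print(3));             // prints "10"
                Console.WriteLine(print2(3, sched.Data));   // prints "10"
            }
        }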

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
          subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
          linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The previous, unsolved error is covered in this bug. This is my grub configuration file:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        RUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap" video=uvesafb:mode_option=>>1024x768-24<<,mtrr=3,scroll=ywrap"
        GRUB_CMDLINE_LINUX=" vga=792 splash"
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        GRUB_GFXMODE=1024x768-24
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Thank you for answering.

  • It's Here! Visual Studio 2010 and ASP.NET 4.0 Ship

    Today Microsoft released Visual Studio 2010 and ASP.NET 4.0. I've been using the RC version of Visual Studio 2010 quite a bit for the past couple of months and have really grown to like it. It has a host of features and enhancements that improve developer productivity, from improved IntelliSense to better multiple-monitor support. Plus, there's something about the user experience that, to me, makes it feel better than Visual Studio 2008. I don't know if it's the new blue color motif or what, but the IDE seems more modern looking and more responsive to my mouse movements and other input.

    Anyway, if you've not yet downloaded Visual Studio 2010 and ASP.NET 4.0, why not? As with previous versions of Visual Studio, there's a free Express Edition, and VS2010 and ASP.NET 4.0 run side-by-side with earlier versions of Visual Studio and ASP.NET. And with Visual Studio 2010's multi-targeting you can even use VS2010 as your development editor for ASP.NET 2.0 and ASP.NET 3.5 web applications. (Although be forewarned, if you have multiple developers working on the application, that the project files in VS2010 and earlier versions of Visual Studio differ.)

    This week's article on 4Guys explores my favorite new features of Visual Studio 2010. Here's an excerpt:

    The Visual Studio 2010 user experience is noticeably different than with previous versions. Some of the changes are cosmetic - gone is the decades-old red and orange color scheme, having been replaced with blues and purples - while others are more substantial. For instance, the Visual Studio 2010 shell was rewritten from the ground up to use Microsoft's Windows Presentation Foundation (WPF). In addition to an updated user experience, Visual Studio introduces an array of new features designed to improve developer productivity. There are new tools for searching for files, types, and class members; it's now easier than ever to use IntelliSense; the Toolbox can be searched using the keyboard; and you can use a single editor - Visual Studio 2010 - to work on ASP.NET 2.0, ASP.NET 3.5, and ASP.NET 4.0 applications.

    This article explores some of the new features in Visual Studio 2010. It is not meant to be an exhaustive list, but rather highlights those features that I, as an ASP.NET developer, find most useful in my line of work. Read on to learn more!

    And, in closing, here are some helpful VS2010 and ASP.NET 4.0 links:

    - One-click installation for ASP.NET 4.0, Visual Web Developer 2010, .NET Framework 4.0, and ASP.NET MVC 2
    - Eight Quick Hit videos showing some of the cool new VS2010 features
    - VS2010 and ASP.NET 4.0 Release Announcement, with some great info/links from none other than Scott Guthrie

    Happy Programming!

  • No suspend on lid closing on a Samsung Series 5 14" NP530U4BI

    - by dmeu
    Ok, I realize I am not the only one with this problem, but I will try to provide all possible info to make this as exemplary as possible and narrow down the error sources. I have a fresh install of Ubuntu 12.04. Suspend on lid close worked fine right after installation, but it does not anymore. The suspend option from the system power button on the top right works fine.

    Things I did do which I don't know if they are related:

    - Installed and removed again the FGLRX drivers (Radeon graphics card)
    - Installed Jupiter power management (shutting it down does not change anything)
    - Plugged in and out an external display

    The configuration, as far as I know, is set correctly:

    - In System Settings/Power, all is set to suspend when closing the lid
    - Double-checked with dconf-editor; everything is set to suspend

    So, from here on I don't know how to proceed. What are common problems that cause this error?

    EDIT: My computer model is: Samsung Series 5 14" NP530U4BI

        $ sudo lspci -nn
        00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller
        00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port
        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller
        00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1
        00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2
        00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller
        00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1
        00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4
        00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5
        00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1
        00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
        00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller
        00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
        01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series]
        02:00.0 Network controller [0280]: Intel Corporation Centrino Advanced-N 6230
        03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
        04:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller

  • 6 Ways to Modernize Your Customer Experience

    - by Mike Stiles
    If customers have changed, if the way they research and shop has changed, if their expectations have changed, if their ability to act on dissatisfaction has changed, but your customer experience has NOT changed, what was once “good enough” may now be crippling.

    Well, the customer has changed, and why wouldn’t they? You’ve probably changed too in your role as consumer. There’s more info available, it’s easier to get, there’s more choice, you’re more mobile, you’re more connected, it’s easier to buy, and yes, it’s easier to switch brands if experiences don’t meet your now-higher expectations. Thanks to technological advances, we as marketers can increasingly work borderline miracles. But if we’re still not adamantly adopting customer centricity, and if we aren’t making the customer experience paramount amongst business goals, the tech is wasted. A far more modern customer experience is called for. Here are 6 ways to get there:

    1. Modern Marketing: Marketing data is aggregated and targeted to the right customers, who are getting personal, relevant communications. In return, you’re getting insight that finally properly attributes revenue to your marketing efforts.

    2. Modern Selling: Demand is being driven across all channels with modern selling tools. Productivity is up thanks to coordinated communication and selling, and performance is ever optimized using powerful analytics.

    3. Modern CPQ: You’re cross-selling and upselling more effectively since reps and channel partners have been empowered with the ability to quickly, automatically generate 100% accurate, customer-friendly quotes complete with price controls and automated approvals.

    4. Modern Commerce: You’re leveraging data and delivering personalized, targeted digital experiences to everyone. You’re attracting more visitors, and you’re able to scale, keep up with the market, and control the experience.

    5. Modern Service: You’re better serving your customers by making it easier for them to engage with your brand, plus you’re lowering your costs by increasing agent and tech support efficiencies.

    6. Modern Social: You’re getting faster, deeper, more accurate insights from social and turning content around faster, which then goes out to the right people at the right time in the right place. You’ve also gotten proactive in your service, and customers love that.

    For far too many brands, the buying journey of Need, Research, Select, Buy, Use, Recommend across the multiple connect points of Social, Mobile, Store, Call Center, Site, Ecommerce is a disconnected mess. Oracle’s approach to CX is to connect every interaction your customer has with your brand, avoiding the revenue losses lousy customer experiences bring. How important is the experience to customers? 94% are willing to pay more of their hard-earned money to have better ones, while a meager 1% say they get the good, consistent experiences they expect. Brands, your words aren’t as loud anymore, so your actions as they relate to customer experience are going to have to do the talking.

    @mikestiles @oraclesocial
    Photo: Julien Tromeur, freeimages.com

  • How can I fix broken i915 drivers for Intel GPUs?

    - by Alen Mujezinovic
    I've had trouble getting the i915 drivers to work correctly on my laptop (HP Pavilion DM4 2101ea). Specifically, the laptop screen goes black and stays black after the splash graphic when booting, both from a USB key and from the hard drive. To get anything onto the display after the splash screen, I have to boot with one of:

        acpi=off
        nomodeset
        i915.modeset=0

    I'd rather not turn ACPI off because I like my fans spinning, and nomodeset is a bit overkill, so for now I'm booting with i915.modeset=0. Unfortunately, this turns off KMS, and my current maximum resolution on the laptop screen is fixed at 1024x768 instead of its real capability.

    When I don't set any of the above boot flags and plug in an external monitor, the external monitor works fine. When booting with the flags, the external monitor works fine too, but it can only do 1024x768 and can't do anything other than mirroring the laptop display. I did upgrade the i915 drivers from 2.17, which ships with Precise, to 2.19, which is the most recent, but without any luck getting anything to display. Here's my lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5)
        00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        01:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
        02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01)
        08:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)

    Here's lshw -C video:

        *-display UNCLAIMED
             description: VGA compatible controller
             product: 2nd Generation Core Processor Family Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 09
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list
             configuration: latency=0
             resources: memory:c0000000-c03fffff memory:b0000000-bfffffff ioport:4000(size=64)

    Both outputs are generated after booting with i915.modeset=0. Here's a complete Xorg.log file from a boot into a black screen: https://gist.github.com/479ce06454e47d6123e1

    The graphics card is an Intel HD 3000 integrated GPU. I've never had problems with Intel hardware on Ubuntu before, so this is very surprising. If you could provide a method to make i915 work, suggest alternative drivers, suggest a way to boot with i915.modeset=0 but with higher resolutions and KMS on, or explain what is happening and how to fix it, I'll give you an answer badge. :) Thanks

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a Web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage in the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure, you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is at no cost, a good solution can be maintaining the information in Azure Storage Tables, reading its contents in the Application_Start event, and propagating its changes to all other instances using the internal HTTP port available on Azure Web Roles. Follow these steps to leverage the internal HTTP endpoint available on Azure web roles to keep all instances up to date.

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.

    2. Add a new WCF service to the Web Role, for example NotificationService.svc.

    3. Disable multiple site bindings in web.config:

            <serviceHostingEnvironment multipleSiteBindingsEnabled="false">

    4. Add a method on the new service to receive notifications from other role instances:

            namespace Service
            {
                [ServiceContract]
                public interface INotificationService
                {
                    [OperationContract(IsOneWay = true)]
                    void Notify(Information info);
                }
            }

    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint:

            public class InternalServiceFactory : ServiceHostFactory
            {
                protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
                {
                    var internalEndpointAddress = string.Format(
                        "http://{0}/NotificationService.svc",
                        RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);

                    ServiceHost host = new ServiceHost(
                        typeof(NotificationService),
                        new Uri(internalEndpointAddress));

                    BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
                    host.AddServiceEndpoint(
                        typeof(INotificationService),
                        binding,
                        internalEndpointAddress);

                    return host;
                }
            }

       Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service.

    6. Edit the markup of the service (right-click the svc file and select "View markup") to set the new factory as the factory to be used to create the service:

            <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory"
                Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %>

    7. Now you can notify changes to other instances using this code:

            var current = RoleEnvironment.CurrentRoleInstance;
            var endPoints = current.Role.Instances
                .Where(instance => instance != current)
                .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);

            foreach (var ep in endPoints)
            {
                EndpointAddress address = new EndpointAddress(
                    String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
                BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
                var factory = new ChannelFactory<INotificationService>(binding);
                INotificationService instance = factory.CreateChannel(address);
                instance.Notify(changedinfo);
            }

  • iwlwifi on lenovo z570 disabled by hardware switch

    - by Kevin Gallagher
    It was working fine with Windows 7. The hardware switch is not disabled; I've toggled it back and forth dozens of times. The wifi light never turns on, and the card is always listed as hardware disabled. I have the latest updates installed. I've been searching for solutions, but none of them seem to work for me. I've tried removing acer-wmi. I've tried setting 11n_disable=1. I've tried resetting the BIOS. I've tried using rfkill to unblock (it only removes the soft block). I've rebooted dozens of times. The wifi light turns off as soon as grub loads.

    Edit: I have a USB Edimax wireless NIC. It shows as hardware disabled as well (although rfkill lists it as unblocked). If I unload iwlwifi, the USB NIC works fine.

    uname -a:

        Linux xxx-Ideapad-Z570 3.2.0-55-generic #85-Ubuntu SMP Wed Oct 2 12:29:27 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    rfkill list:

        19: phy18: Wireless LAN
            Soft blocked: no
            Hard blocked: yes

    dmesg:

        [43463.022996] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:
        [43463.023002] Copyright(c) 2003-2011 Intel Corporation
        [43463.023107] iwlwifi 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [43463.023190] iwlwifi 0000:03:00.0: setting latency timer to 64
        [43463.023253] iwlwifi 0000:03:00.0: pci_resource_len = 0x00002000
        [43463.023257] iwlwifi 0000:03:00.0: pci_resource_base = ffffc900057c8000
        [43463.023261] iwlwifi 0000:03:00.0: HW Revision ID = 0x0
        [43463.023797] iwlwifi 0000:03:00.0: irq 43 for MSI/MSI-X
        [43463.024013] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C
        [43463.024250] iwlwifi 0000:03:00.0: L1 Enabled; Disabling L0S
        [43463.045496] iwlwifi 0000:03:00.0: device EEPROM VER=0x15d, CALIB=0x6
        [43463.045501] iwlwifi 0000:03:00.0: Device SKU: 0X50
        [43463.045504] iwlwifi 0000:03:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3
        [43463.045542] iwlwifi 0000:03:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
        [43463.045744] iwlwifi 0000:03:00.0: RF_KILL bit toggled to disable radio.
        [43463.047652] iwlwifi 0000:03:00.0: loaded firmware version 39.31.5.1 build 35138
        [43463.047823] Registered led device: phy18-led
        [43463.047895] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
        [43463.048037] ieee80211 phy18: Selected rate control algorithm 'iwl-agn-rs'
        [43463.055533] ADDRCONF(NETDEV_UP): wlan0: link is not ready

    nm-tool:

        State: connected (global)

        - Device: wlan0 ----------------------------------------------------------------
          Type: 802.11 WiFi
          Driver: iwlwifi
          State: unavailable
          Default: no
          HW Address: 74:E5:0B:4A:9F:C2

          Capabilities:

          Wireless Properties
            WEP Encryption: yes
            WPA Encryption: yes
            WPA2 Encryption: yes

          Wireless Access Points

    lshw -C network:

        *-network DISABLED
             description: Wireless interface
             product: Centrino Wireless-N 1000 [Condor Peak]
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: wlan0
             version: 00
             serial: 74:e5:0b:4a:9f:c2
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-55-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg
             resources: irq:43 memory:f1500000-f1501fff

    lspci:

        03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak]

  • Cursor running wild, then crashes on an Asus G73sw

    - by Yarchmon
    The cursor sometimes goes wild: I get random clicks, the windows resize themselves, the cursor disappears. In the worst case, clicks and the keyboard are disabled. I've tried the solution given on doc.ubuntu-fr.org and added to grub, in GRUB_CMDLINE_LINUX_DEFAULT:

        i8042.nomux=1 i8042.reset=1

    But it didn't work. What can I do?

    - Graphics card: GeForce GTX460M
    - Ubuntu: 11.10 (64 bits)
    - Laptop: Asus G73sw
    - Interface: Unity (since 11.10) - didn't get this problem with Gnome before.

    Additional detail: when a window is resizing, it gets drag-boxes at every corner, at the center of each side, and at the center of the window. It looks like my touchpad sends random info, or like a "ghost" touchscreen.

    lspci result:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5)
        00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
        00:1d.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        01:00.0 VGA compatible controller: nVidia Corporation GF106 [GeForce GTX 460M] (rev a1)
        01:00.1 Audio device: nVidia Corporation GF106 High Definition Audio Controller (rev a1)
        03:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
        04:00.0 USB Controller: Fresco Logic FL1000G USB 3.0 Host Controller (rev 04)
        05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

    Edit 01-09-12: Tried Ubuntu 2D: the behavior is different; it's like I'm randomly clicking on the workspace switcher icon. In the worst case, it can happen several times in a minute.

  • Is Cygwin or Windows Command Prompt preferable for getting a consistent terminal experience for development?

    - by Paul Hazen
    The question: Which is better - installing Cygwin or one of its cousins on all my Windows machines to have a consistent terminal experience across all my development machines, or becoming well trained in the skill of mentally switching from a Linux terminal to the Windows command prompt?

    Systems I use:

    - OS X Lion on a MacBook Air
    - Windows 8 on a desktop
    - Windows 7 on the same desktop
    - Fedora 16 on the same desktop

    What I'm trying to accomplish: configure an entirely consistent (or consistent enough) terminal experience across all my machines. "Enough" in this context is clearly subjective, so please be clear in your answer about why the configuration you suggest is consistent enough.

    One more thing to keep in mind: while I do write a lot of code intended to run on Windows (actually, code that runs on Windows Phone, which necessitates a Windows machine), I also write a lot of Java code, and prefer to do so in vim. I test a local repo in Java on my Windows machine, and push to another test machine running Ubuntu later in the development stage. When I push to the Ubuntu machine, I'm exclusively in the terminal, since I'm accessing it via SSH.

    Summary, with a more accurate question: Is there a good way to accomplish what I'm trying to do, or is it better to get accustomed to remembering different commands based on the system I'm on? Which (if either) is considered "best practice" by the development community? Alternatively, for a consistent development experience, would it be better to write all my code SSHed into another machine, and move things to Windows for compile/build only when I need to? That seems like too much work... but could be a solution.

    Update: While there are insightful responses below, I have yet to hear an answer that talks about why any given solution is superior. Cygwin/GnuWin32 is certainly a way to accomplish a similar experience on all platforms, but since I'm just learning all things command line, I don't want to set myself up to do a lot of relearning/unlearning in the future. Cygwin/GnuWin32 has its peculiarities, I would imagine, and being aware of how that setup works on Windows is a learning curve. Additionally, using Cygwin/GnuWin32 robs me of learning the benefits of PowerShell. As a newcomer to working in a command line, which path should I choose to minimize having to relearn/unlearn things in the future? Or, as my first paragraph poses: is it better to use Cygwin, or to become well trained in the skill of mentally switching from a Linux terminal to the Windows command prompt?

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context: Working as a freelance developer, I often made websites completely based on XSLT. In other words, on every request, an XML file is generated containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes (with caching, etc.) it into an HTML/XHTML page to send to the browser.

    This makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.

    Question: Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like this (see the example response after this post):

        <?xml-stylesheet href="demo.xslt" type="text/xsl"?>

    Firefox 3 can do it. Internet Explorer 8 can do it too. This means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing both their and the server's bandwidth (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage.

    What are the drawbacks of this technique? I thought about several, but they don't apply in this situation:

    - Difficult implementation, and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one; the only changes to make are to add the XSL file link to every XML and to add a browser check.
    - More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (as images, CSS, or JavaScript files are cached now).
    - Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    - Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly (whitespace being removed).
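
    To make the client-side setup concrete, here is a minimal sketch of such an XML response (the element names and demo.xslt are hypothetical; the processing instruction is the one quoted above):

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet href="demo.xslt" type="text/xsl"?>
        <page>
          <user>jdoe</user>
          <menu dynamic="true">
            <entry>Home</entry>
            <entry>Profile</entry>
          </menu>
          <content>The text to display in a specific area of the page.</content>
        </page>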

  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find it was actually the Double.Parse which seemed to be taking up most of the time. Googling turned up this, which is a little unfocused in places but throws out some excellent ideas:

    - It makes more sense to work with binary format directly, rather than coerce strings into doubles
    - There is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries
    - String.Split is inefficient on fixed-length records

    In fact it turned out that my problem was more insidious and also more mundane -- a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error. The issue is that Double.Parse("NaN") is incredibly slow. In fact, it is of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1000 NaNs, versus 15ms to parse 100,000 valid doubles.

        const int invalid_iterations = 1000;
        const int valid_iterations = invalid_iterations * 100;
        const string invalid_string = "NaN";
        const string valid_string = "3.14159265";

        DateTime start = DateTime.Now;

        for (int i = 0; i < invalid_iterations; i++)
        {
            double invalid_double = Double.Parse(invalid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
            invalid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

        start = DateTime.Now;

        for (int i = 0; i < valid_iterations; i++)
        {
            double valid_double = Double.Parse(valid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
            valid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

    I think the moral is to look at the context -- specifically the data -- as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.
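
    A minimal sketch of the corresponding upstream guard (SafeCsv and Serialize are hypothetical names): refuse to write non-finite values, so a stray NaN fails loudly at serialisation time instead of silently slowing down every later parse.

        using System;
        using System.Globalization;

        static class SafeCsv
        {
            // Fail fast instead of writing "NaN"/"Infinity" into the CSV.
            public static string Serialize(double value)
            {
                if (Double.IsNaN(value) || Double.IsInfinity(value))
                    throw new ArgumentOutOfRangeException(nameof(value),
                        "Refusing to serialize a non-finite double.");
                return value.ToString(CultureInfo.InvariantCulture);
            }
        }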

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure the following in your JVM startup parameters:

        -Dsun.reflect.inflationThreshold=0

    https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite below.

    Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as a file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts.

    However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflectively accessed data to be optimized from a JNI call into Java bytecodes, which are then amenable to hotspot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process.

    SOA Suite and WebLogic are both very large users of reflection. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try to keep memory available, until it can no longer do so and throws OutOfMemoryError.

    The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs Oracle JVMs.

  • Office 2010 Professional Plus (Top 10 reasons to upgrade)

    - by mbcrump
    Being a huge nerd, I decided that I would go ahead and upgrade to the latest and greatest Office - that being Office 2010 Professional Plus. The biggest concern that I had was losing all my mail settings from Outlook 2007. Thankfully, the upgrade went gracefully and worked like a charm. So let's start this top 10 list.

    1) You can upgrade without fear of losing all your stuff! As you can tell by the screenshot below, you can select what you want to do. I selected to remove all previous versions.

    2) Outlook conversations: just like Gmail, you can now group emails by conversation. This is simply awesome and a must-have.

    3) The ability to ignore conversations. If you are on an email thread that has nothing to do with you, simply "ignore" the conversation and all its emails go into the deleted folder.

    4) Quick Steps: do you send an email to the same team member or group constantly? With Quick Steps, it's just one click away.

    5) Spell check in the subject line!

    6) Easier screenshots - built in, just click the button. No more Alt+PrintScreen, for those that are not aware of the awesome SnagIt 10 that's out.

    7) Open in Protected View. When you open a document from an email attachment, it lets you know the file may be unsafe. You can click a button to enable editing. This is great for preventing macros.

    8) Excel has always had a variety of charts and graphs available to visually depict data and trends. With Excel 2010, though, Microsoft has added a new feature called Sparklines, which allows you to place a mini-graph or trend line in a single cell. Sparklines are a cool way to quickly and simply add a visual element without having to go through the effort of inserting a graph or chart that overwhelms the worksheet.

    9) Contact actions. If you hover over a name in the From or To fields on an email, you get a popup giving you several actions you can perform on the person, such as adding them to your Outlook contacts, scheduling a meeting, viewing their stored contact information if they are already in your contacts, sending an instant message, or even starting a telephone call.

    10) Windows 7 taskbar context menu - I love the jumplist. I don't know how much I would actually use it, but it just rocks.

  • Layout of mathematical views (iOS)

    - by William Jockusch
    I am trying to figure out the right way to encapsulate graphical information about mathematical objects. It is not simple. For example, a matrix can include square brackets around its entries, or not. Some things carry down to sub-objects - for example, a matrix might track the font size to be used by its entries. Similarly, the font color and the background color would carry down to the entries. Other things do not carry down. For example, the entries of the matrix do not need to know whether or not the matrix has those square brackets.

    Based on all of the above, I need to calculate sizes for everything, then frames. All of this can depend on the properties stored above. The size of a matrix depends on the sizes of its entries, and also on whether or not it has those brackets. What I am having a hard time with is not the individual ways to calculate sensible frames for this or that; it is the overall organizational structure of the whole thing. How can I keep track of it all without going crazy?

    One particular obstacle is worth mentioning: for reasons I don't want to go into here, I need to calculate the sizes and frames for everything before I instantiate any actual views. So, for example, if I have a Matrix object, I need to calculate its size before I make a MatrixView. If I have an equation, I need to calculate the size of the view for the equation before I create the actual view. So I clearly need separate objects for those calculations. But I can't figure out a sensible class structure for those objects.

    If I put them all into a single class, I get some advantages because copying then becomes easy. But I also end up with a bloated class that contains info that is irrelevant for some objects - such as whether or not to include those brackets around the matrix. But if I use a lot of different classes, copying properties becomes a real pain.

    If it matters, this is all in Objective-C, for an iOS environment. Any pointers would be greatly appreciated.
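
    Not an Objective-C answer, but here is a minimal C# sketch of one common structure for this (all names and the bracket-padding constant are hypothetical): a lightweight layout-node tree, separate from the views, that computes sizes before any view is instantiated. Inherited properties live on the base node and are copied down during measurement; local properties such as brackets stay on the subclass that owns them.

        using System;
        using System.Collections.Generic;

        abstract class LayoutNode
        {
            public double FontSize = 12;                    // carries down to children
            public readonly List<LayoutNode> Children = new();
            public double Width, Height;                    // filled in before any view exists

            public abstract void Measure();
        }

        class MatrixNode : LayoutNode
        {
            public bool HasBrackets;                        // local: children never see this

            public override void Measure()
            {
                double w = 0, h = 0;
                foreach (var child in Children)
                {
                    child.FontSize = FontSize;              // push inherited attributes down
                    child.Measure();
                    w += child.Width;                       // simplified: children laid out in a row
                    h = Math.Max(h, child.Height);
                }
                if (HasBrackets) w += FontSize * 0.8;       // arbitrary bracket padding
                Width = w; Height = h;
            }
        }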

  • System freezes while not in use, how do I fix this?

    - by PHLAK
    Bear with me; the following is a bit long-winded. I have Ubuntu 10.10 Desktop 64-bit installed on my laptop, and up until a few weeks ago it had been running great. Then one day, while I was not using the laptop, it froze. I was logged in as my user but had locked the screen and closed the lid. I didn't notice that it had frozen until I opened the lid and wiggled the mouse to try to log in. The screen remained black and I got no response. I immediately tried Alt + F2, F3, F4, etc., but got no response. The only thing I could do was hold the power button to power off the machine. The freezing has happened as quickly as within 10-20 minutes of the system being logged off and the lid closed, and after as long as 4-6 hours. My machine is NOT configured to go into standby when plugged in, and this has happened both on AC power and on battery.

    Troubleshooting I have performed:

    - I uninstalled programs I knew I had installed between when it was working fine and when it started having problems. Those programs were CrashPlan, Shutter and Conky. After uninstalling ALL of these programs the freezing still occurs.
    - Next, I decided to SSH into the machine from my desktop and leave an htop and a tail of the syslog running. Here are screenshots of the last thing shown on both when the system froze: htop, syslog.
    - Here is a dump of my syslog after another freeze. The freeze happened at 9:14 and I didn't notice it until about 10 minutes later and rebooted, hence the 10-minute gap from 9:14 to 9:24.
    - In the above syslog dump I noticed a lot of "NVRM: os_raise_smp_barrier(), invalid context!" messages and, upon investigating, learned they came from the proprietary Nvidia driver I had installed. Thinking this could be part of the problem, I uninstalled the Nvidia driver and reverted to the Nouveau driver. The computer still froze after a few hours.
    - Lastly, thinking the problem could be caused by overheating, I used compressed air to blow out any dust in the CPU vents and all other openings on the laptop.

    None of the above troubleshooting has helped and the freezing still occurs. What other steps can I take to troubleshoot and/or fix this problem?

    Note: Yesterday X started to eat up a lot of CPU power and eventually froze my system while I was forwarding an X session over SSH (from another PC to my laptop). I'm unsure if this is related or not, as it doesn't match any of the symptoms of the problem above. Aside from this, the system has never frozen while in use, even under heavy load.

    EDIT: I just ran Memtest86+ and it made it through two passes without any errors. Just eliminating possible causes here.

  • Running C++ AMP kernels on the CPU

    - by Daniel Moth
    One of the FAQs we receive is whether C++ AMP can be used to target the CPU. For targeting multi-core we have a technology we released with VS2010 called PPL, which has had enhancements for VS 11 - that is what you should be using! FYI, it also has a Linux implementation via Intel's TBB, which conforms to the same interface.

    When you choose to use C++ AMP, you choose to take advantage of massively parallel hardware, through accelerators like the GPU. Having said that, you can always use the accelerator class to check if you are running on a system where there is no hardware with a DirectX 11 driver, and decide what alternative code path you wish to follow. In fact, if you do nothing in code and the runtime does not find DX11 hardware to run your code on, it will choose the WARP accelerator, which will run your code on the CPU, taking advantage of multi-core and SSE2 (depending on the CPU capabilities, WARP also uses SSE3 and SSE 4.1 - it does not currently use AVX, and on such systems you hopefully have a DX11 GPU anyway).

    A few things to know about WARP:

    - It is our fallback CPU solution, not intended as a primary target of C++ AMP.
    - WARP stands for Windows Advanced Rasterization Platform, and you can read old info on this MSDN page on WARP. What is new in Windows 8 Developer Preview is that WARP now supports DirectCompute, which is what C++ AMP builds on.
    - It is not currently clear if we will have a CPU fallback solution for non-Windows 8 platforms when we ship.
    - When you create a WARP accelerator, its is_emulated property returns true.
    - WARP does not currently support double precision.

    BTW, when we refer to WARP, we refer to the accelerator described above. If we use lower case "warp", that refers to a bunch of threads that run concurrently in lock step and share the same instruction. In the VS 11 Developer Preview, the size of a warp in our Ref emulator is 4 - Ref is another emulator that runs on the CPU, but it is extremely slow and not intended for production, just for debugging.

    Comments about this post by Daniel Moth welcome at the original blog.

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"?

        public int test1(String str) {
            return str.length();
        }

        public int test2(String str) {
            if (str == null) {
                System.err.println("Passed null!.");
                //forgotten return;
            }
            return str.length();
        }

    The difference, or at least the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks that the variable "str" may be null, though they did not provide a null check. In NetBeans IDE 7.3, you see a hint for "test2" but no hint for "test1", since in the latter case we don't know anything about the developer's intention for the variable, and providing a hint in that case would flood the source code with too many false positives.

    Annotations are supported in understanding how a piece of code is intended to be used. If method return types use @Nullable, @NullAllowed, or @CheckForNull, the value is considered "strongly possible to be null", as is a variable that is tested to be null, as shown above. When using @NotNull, @NonNull, or @Nonnull, the value is considered to be non-null. (The exact FQNs of the annotations are ignored; only simple names are checked.)

    Here are examples showing where the hints are displayed for the non-null hints (the "strongly possible to be null" hints are not shown below, though you can see one of them in the screenshot above), together with a comment showing what is shown when you hover over the hint.

    There isn't a "one size fits all" refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via the tools in NetBeans IDE, as shown yesterday for class member reordering across code bases. However, you can instead go to Source | Inspect and then scan throughout a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area.

    Thanks to Jan Lahoda for help with the info above; any misunderstandings are my own fault! He reports that this currently also works in NetBeans IDE 7.3 dev builds for fields, but that it may need to be disabled, since right now too many false positives are returned.

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but only every so often (edit: the tree could specify how many frames within the main game loop to wait before running its tick function again). Every theoretical implementation I have seen of behaviour trees talks of the tree search being carried out every game update - which seems necessary, because a leaf node (e.g. a behaviour like "return to base") needs to be constantly checked to see if it is still running, has failed, or has completed. Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me in the direction of good material specific to this case (I am struggling to find anything)?

    My thoughts so far (a code sketch of this split follows at the end of this post):

    - Action leaf nodes (when they start) must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed from the list).
    - The return state from a given action must be fed back into the tree, so that when we run the tree iteration again (and reach the same action leaf node - so the tree has so far determined that we ought to still be trying this action), we know whether the action has completed, is still running, etc.
    - If my actual action code is running from an action list on an entity, then I possibly need to cancel previously running actions in the list - I am thinking that I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists which block lower-priority actions when a higher-priority one is added, but this seems like very similar logic to behaviour trees, and I don't want to be duplicating behaviour.

    This leaves me with some questions:

    1) How would I feed the action return state back into the tree? It's obvious I need to store some information relating to currently executing actions on the entity, and check that in the tree tick, but I can't imagine how.

    2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out actual queued-up actions) sound like a reasonable approach?

    3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process.

    (Edit) I am also thinking about storing a single instance of a given behaviour tree in memory and providing it by reference to any entity that uses it. So any information about which action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as I go!)

    I hope I have expressed my questions adequately! Thanks in advance for any help. :)
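
    Here is a minimal, self-contained C# sketch (all names hypothetical) of questions 1 and 2 combined: the shared tree's leaf tick only (re)issues an action object on the entity and reads back its stored status, while the entity's action list does the real work every frame.

        using System;
        using System.Collections.Generic;

        enum Status { Running, Succeeded, Failed }

        // An action object pushed by a leaf node; updated every frame by the entity.
        abstract class EntityAction
        {
            public Status Status { get; protected set; } = Status.Running;
            public abstract void Update(float dt);
        }

        class ReturnToBase : EntityAction
        {
            private float remaining = 2f;                   // stand-in for real movement logic
            public override void Update(float dt)
            {
                remaining -= dt;
                if (remaining <= 0f) Status = Status.Succeeded;
            }
        }

        // Per-entity data context: the shared tree reads and writes only this.
        class Entity
        {
            public readonly List<EntityAction> Actions = new();
            public EntityAction? Current;                   // last action the tree requested

            public void UpdateActions(float dt)             // called every frame
            {
                for (int i = Actions.Count - 1; i >= 0; i--)
                {
                    if (Actions[i].Status == Status.Running) Actions[i].Update(dt);
                    if (Actions[i].Status != Status.Running) Actions.RemoveAt(i);
                }
            }
        }

        static class BehaviourTreeDemo
        {
            // A leaf's tick, run only every N frames: (re)issue the action if needed,
            // then report the stored status back up the tree.
            static Status TickReturnToBase(Entity e)
            {
                if (e.Current is not ReturnToBase rtb)
                {
                    e.Actions.Clear();                      // cancel superseded actions
                    rtb = new ReturnToBase();
                    e.Current = rtb;
                    e.Actions.Add(rtb);
                }
                return rtb.Status;
            }

            static void Main()
            {
                var e = new Entity();
                const float dt = 1f / 60f;
                for (int frame = 0; frame < 180; frame++)
                {
                    if (frame % 30 == 0)                    // infrequent tree tick
                        Console.WriteLine($"tick -> {TickReturnToBase(e)}");
                    e.UpdateActions(dt);                    // every-frame action update
                }
            }
        }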

  • A sample Memento pattern: Is it correct?

    - by TheSilverBullet
    Following this query on the memento pattern, I have tried to put my understanding to the test. The memento pattern stands for three things:

    1. Saving the state of the "memento" object for its successful retrieval
    2. Carefully saving each valid "state" of the memento
    3. Encapsulating the saved states away from the change inducer, so that each state remains unaltered

    Have I achieved these three with my design?

    Problem: This is a zero-player game where the program is initialized with a particular setup of chess pawns - knights and queens. The program then needs to keep adding sets of knights and queens so that each pawn is "safe" for the next one move of every other pawn. The condition is that either both pawns should be placed, or none of them should be placed. The chessboard with the most non-conflicting knights and queens should be returned.

    Implementation: I have 4 classes for this:

    protected ChessBoard (the memento):

        private int [][] ChessBoard;
        public void ChessBoard();
        protected void SetChessBoard();
        protected void GetChessBoard(int);

    public Pawn (not related to the memento; it holds info about the pawns):

        public enum PawnType: int
        {
            Empty = 0,
            Queen = 1,
            Knight = 2,
        }
        //This returns a value that shows whether the pawn can be placed safely
        public bool IsSafeToAddPawn(PawnType);

    public CareTaker (corresponds to the caretaker of the memento). This holds a two-dimensional integer array that keeps track of all states. The reason for having a 2D array is to track how many states are stored and which state is currently active. An example:

        0 -2
        1 -1
        2  0   <- This is the current state (second index 0)
        3  1   <- This state has been saved, but has been undone

        private int [][] State;
        private ChessBoard [] MChessBoard;
        //This gets the chessboard at the position requested and assigns it to the originator
        public ChessBoard GetChessBoard(int);
        //This overwrites the chessboard at the given position
        public void SetChessBoard(ChessBoard, int);

    public PlayGame (this is the originator):

        private bool status;
        private ChessBoard oChessBoard;
        //This sets the state of the chessboard at the position specified
        public SetChessBoard(ChessBoard, int);
        //This gets the state of the chessboard at the position specified
        public ChessBoard GetChessBoard(int);
        //This function tries to place both pawns and returns the status of this attempt
        public bool PlacePawns(Pawn);
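
    For reference, a minimal C# Memento/Caretaker/Originator sketch (illustrative only, not the asker's design), focused on the third point above: the caretaker never looks inside a snapshot, and the memento hands out defensive copies.

        using System.Collections.Generic;

        // Memento: an immutable snapshot of the board.
        sealed class BoardMemento
        {
            private readonly int[,] cells;
            public BoardMemento(int[,] cells) { this.cells = (int[,])cells.Clone(); }
            public int[,] Cells => (int[,])cells.Clone();   // hand out copies only
        }

        // Originator: the live board that produces and restores snapshots.
        class Board
        {
            private int[,] cells = new int[8, 8];
            public BoardMemento Save() => new BoardMemento(cells);
            public void Restore(BoardMemento m) { cells = m.Cells; }
            public void Place(int row, int col, int pawn) { cells[row, col] = pawn; }
        }

        // Caretaker: stores snapshots without ever looking inside them.
        class History
        {
            private readonly Stack<BoardMemento> saved = new();
            public void Push(BoardMemento m) => saved.Push(m);
            public BoardMemento Pop() => saved.Pop();
        }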

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to re-design the class structure for my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this:

    Let us assume there is an abstract class called Person, which has certain properties to define a generic person, such as name, age, date of birth, etc. There are two classes, Student and Teacher, that implement this abstract class. They add their own unique properties to it. I have designed all three classes to include all the operational logic (the details of which are not relevant in the context of this question).

    Now, I need to create views/reports/data grids which contain details from multiple classes; for example, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application, which use data from 3-4 tables, and each of them has multiple input parameters to generate it. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the Student class or the Teacher class, or to any other class currently present.

    a) Should I create a new class, say DataView, design it following the MVC pattern, and keep the query there? (A sketch of this option follows at the end of this post.) What about the other views? How do they fit into this architecture?

    b) Should I not keep the query in code at all, and make it a DB view?

    c) Am I completely wrong in my approach? If so, what is the right approach?

    My considerations are as follows:

    - It should be easy to add new views later if a requirement comes up, without having to copy-paste-modify code
    - I would like to make it as loosely coupled as possible, so that minor DB structure changes don't break it

    I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than on fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I'm searching with the wrong keywords).

    Edit 1: Modified the title to make it more relevant.
    Edit 2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow, which gave me the detailed answer that cleared up the confusion.
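
    A minimal sketch of option (a) - in C# rather than PHP, purely for illustration; all table, column, and type names are hypothetical. The idea is a dedicated read-model/query class per view: the JOIN lives there, the Student and Teacher classes stay untouched, and new views get new query classes rather than new model methods.

        using System;
        using System.Collections.Generic;

        // A row of the report: deliberately not a Student or a Teacher object.
        record ProjectRow(string StudentName, string Subject, string TeacherName);

        interface IProjectReportQuery
        {
            IReadOnlyList<ProjectRow> StudentsMentoredBy(string teacherName, string subject);
        }

        // Hypothetical SQL-backed implementation; the JOIN lives here, not in the models.
        class SqlProjectReportQuery : IProjectReportQuery
        {
            private readonly Func<string, object[], IEnumerable<ProjectRow>> run; // injected DB runner

            public SqlProjectReportQuery(Func<string, object[], IEnumerable<ProjectRow>> run)
                => this.run = run;

            public IReadOnlyList<ProjectRow> StudentsMentoredBy(string teacherName, string subject) =>
                new List<ProjectRow>(run(
                    @"SELECT s.name, p.subject, t.name
                      FROM students s
                      JOIN projects p ON p.student_id = s.id
                      JOIN teachers t ON t.id = p.teacher_id
                      WHERE t.name = ? AND p.subject = ?",
                    new object[] { teacherName, subject }));
        }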

  • Making A Photo-Sharing App For Android In Eclipse [on hold]

    - by user3694394
    I've only just started developing mobile apps, which is something that I've been wanting to learn for a while now. I'm from an indie games studio, making PC games for around the last 3 years, and I finally decided to move into Android app development. The only problem I'm having is that I don't know where to start.

    The project I'm aiming to create will be something similar to Instagram: basically a photo-sharing app which allows users to take new photos, or pull them from their device, add filters to them, and then post them. I have a rough idea of how I could go about doing this, but I need pointing towards any tutorials available for each specific step. So, here's my idea:

    - Create a UI in Eclipse (this won't be a problem for me; I should be able to do this fine through XML files).
    - Set up a server-side database to store all user info and uploaded images (the images will need to be converted into byte arrays, and I have no idea how to do this through a database). My best idea would be to use a MySQL database for this.
    - Add user interactions (likes, favourites, reposts, etc.). These would, again, have to be stored in the database (or at least I think they would).
    - Add the ability to take new photos using the phone's camera (I can probably do this anyway, using the Camera API).
    - Add the ability to pull existing photos from the device (again, pretty simple to do).
    - Add the ability to apply filters to any photos (I had a look around, and there are some repos and resources which allow you to do this, but they're mainly for iOS development).
    - Add Facebook/Twitter integration (possibly) to allow photos to be shared to other social networks.
    - Create a news feed which shows users all of the latest photos from their friends, and allows them to post their own images to it.
    - Give all registered users their own wall/page which has their latest posts/images displayed on it.
    - Add the ability for users to follow other users, and display the followed users' posts on their news feed.

    Yep. It's not going to be easy, and I don't even know if it's possible for me to do alone in Eclipse. However, this is the plan, and I'm going to do my best to learn everything I need to know to do this successfully. My actual question is how I should start doing this - where do I begin learning how to do all of this? I've had a look at Snapify, which can be edited via Parse, but I won't be spending hundreds of dollars (since I'm 15 and just don't have the available funds) on software. I have extensive knowledge of Java (again, I've been making games for around 3 years, mainly in Java), and various scripting languages. So, hopefully, this will be of some use here. Thanks in advance, Josh.

  • C# Dev - I've tried Lisps, but I don't get it.

    - by Jonathan Mitchem
    After a few months of learning about and playing with Lisps, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in one of them instead of C#. I would really like some compelling reasons, or for someone to point out that I'm missing something really big.

    The strengths of a Lisp (per my research):

    - Compact, expressive notation - more so than C#, yes... but I seem to be able to express those ideas in C# too.
    - Implicit support for functional programming - C# has this with LINQ extension methods: mapcar = .Select( lambda ); mapcan = .Select( lambda ).Aggregate( (a,b) => a.Union(b) ); car/first = .First(); cdr/rest = .Skip(1); etc.
    - Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: "(lambda (x) ( body ))" versus "x => ( body )". The "#(" shorthand with "%", "%1", "%2" is nice in Clojure.
    - Method dispatch separated from the objects - C# has this through extension methods.
    - Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours (see the sketch below).
    - Code is data (and macros) - maybe I haven't "gotten" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the "language", but I'm not sure that's a strength.
    - DSLs - can only do them through function composition... but it works.
    - Untyped "exploratory" programming - for structs/classes, C#'s auto-properties and "object" work quite well, and you can easily escalate into stronger typing as you go along.
    - Runs on non-Windows hardware - yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...)
    - The REPL for bottom-up design - OK, I admit this is really, really nice, and I miss it in C#.

    Things I'm missing in a Lisp (due to a mix of C#, .NET, Visual Studio, and Resharper):

    - Namespaces. Even with static methods, I like to tie them to a "class" to categorize their context. (Clojure seems to have this; CL doesn't seem to.)
    - Great compile-time and design-time support: the type system allows me to determine the "correctness" of the data structures I pass around; anything misspelled is underlined in real time, so I don't have to wait until runtime to know; code improvements (such as using an FP approach instead of an imperative one) are auto-suggested.
    - GUI development tools: WinForms and WPF. (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me.)
    - GUI debugging tools: breakpoints, step-in, step-over, value inspectors (text, XML, custom), watches, debug-by-thread, conditional breakpoints, and a call-stack window with the ability to jump to the code at any level in the stack. (To be fair, my stint with Emacs+Slime seemed to provide some of this, but I'm partial to the VS GUI-driven approach.)

    I really like the hype surrounding Lisps and I gave them a chance. But is there anything I can do in a Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?
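
    On the multimethod point, C# can in fact approximate multiple dispatch at runtime with the dynamic keyword, which re-resolves the overload per call using the runtime types of the arguments. A small sketch - the Shape hierarchy here is invented purely for illustration:

        using System;

        abstract class Shape { }
        class Circle : Shape { }
        class Square : Shape { }

        static class Collisions
        {
            // The overload is chosen by the runtime types of BOTH arguments
            // when invoked through 'dynamic' - a poor man's multimethod.
            public static string Collide(Circle a, Circle b) { return "circle/circle"; }
            public static string Collide(Circle a, Square b) { return "circle/square"; }
            public static string Collide(Square a, Square b) { return "square/square"; }
            public static string Collide(Shape a, Shape b)   { return "generic fallback"; }

            public static string Dispatch(Shape a, Shape b)
            {
                return Collide((dynamic)a, (dynamic)b);
            }
        }

        static class Demo
        {
            static void Main()
            {
                Shape x = new Circle();
                Shape y = new Square();
                Console.WriteLine(Collisions.Dispatch(x, y)); // prints "circle/square"
            }
        }

    What this does not give you is Clojure's defmulti, which dispatches on an arbitrary function of the arguments rather than only on their types - that is the part C# does not replicate as cheaply.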

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it or on how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code:

        #include <GL/glew.h>
        #include <GLFW/glfw3.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Shader-loading helper, defined elsewhere. */
        GLuint LoadShaders(const char* vertex_file_path, const char* fragment_file_path);

        static const GLuint g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };

        int main(void)
        {
            GLFWwindow* window;

            if (!glfwInit()) {
                fprintf(stderr, "Failed to initialize GLFW.");
                return -1;
            }

            glfwWindowHint(GLFW_SAMPLES, 4);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL);
            if (!window) {
                glfwTerminate();
                fprintf(stderr, "Failed to create a GLFW window");
                return -1;
            }
            glfwMakeContextCurrent(window);

            glewExperimental = GL_TRUE;
            GLenum err = glewInit();
            if (err != GLEW_OK) {
                glfwTerminate();
                fprintf(stderr, "Failed to initialize GLEW");
                fprintf(stderr, (char*)glewGetErrorString(err));
                return -1;
            }

            GLuint VertexArrayID;
            glGenVertexArrays(1, &VertexArrayID);
            glBindVertexArray(VertexArrayID);

            GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl");

            GLuint vertexBuffer;
            glGenBuffers(1, &vertexBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

            while (!glfwWindowShouldClose(window)) {
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glUseProgram(programID);

                glEnableVertexAttribArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
                glDrawArrays(GL_TRIANGLES, 0, 3);
                glDisableVertexAttribArray(0);

                glfwSwapBuffers(window);
                glfwPollEvents();
            }

            glDeleteBuffers(1, &vertexBuffer);
            glDeleteProgram(programID);

            glfwDestroyWindow(window);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }

    I know it is not my shaders; they are super simple, and I've checked them against GLFW 2.7, so I know that they work. I'm assuming that I've missed something crucial about using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

  • What could be the best way to generalize data from Facebook and Twitter?

    - by Sjaak van der Heide
    I am not sure if this is the best subsite to ask this question, but I'm pretty sure it doesn't fit on the normal or Facebook SO page... I've been asked to make a general API for connecting to several social media platforms (at the moment Facebook and Twitter). I have already realised both of them separately, meaning I retrieve the data I need from both Facebook and Twitter and hold each in its own data class: in my case, a list of FacebookTimelineItems and a list of TwitterTimelineItems. Now the hard part is taking the parts that are used in both (username, id, message and such) and making one general class that is eventually passed back to whoever/whatever sent the call to my API. These are two pics of the data classes I have:

        http://imageshack.us/photo/my-images/703/facebookdata.png/
        http://imageshack.us/photo/my-images/204/twitterdata.png/

    They are probably not 100% correct, but they give an idea of what it looks like. Now, I've been having several ideas about how to generalize the two, which is harder than I thought at first. I'm not familiar enough with JS-style closures to know if I should be embracing or avoiding certain approaches, so my options are:

    - Create an interface (TimelineItem) and let the other classes extend that one (a sketch of this idea follows below). This way I'll always be sure I have a class that contains at least the basic info I need. The downside is that deserializing the JSON seems to be a nightmare.
    - Use the two data classes I have and combine them into a new class afterwards, then pass that one back to whoever requested it. This would probably work, but I get the idea it's not the best way to tackle this problem, and it is pretty dodgy IF I get it working.
    - Or, in case the other two turn out to be nearly impossible: keep the two separated in the front end, and go sit in the corner crying because I've just figured out you can't lump together Facebook and Twitter...

    Note: I don't have to make the front-end part (view); I just make sure the Model is nicely filled with data :) I hope I placed this in the right section; if I didn't, I apologise and would like to know where I should go with my question. Thanks in advance for any replies/ideas/opinions on this.
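
    On the first idea, here is a minimal, hypothetical C# sketch of the shape. The property names are guesses based on the "username, id, message" overlap the asker mentions, and the platform-specific extras are invented; each platform class keeps its own fields while the API only exposes the shared interface:

        using System;
        using System.Collections.Generic;

        // The shared contract the API exposes to callers.
        public interface ITimelineItem
        {
            string Id       { get; }
            string Username { get; }
            string Message  { get; }
            DateTime Posted { get; }
        }

        // Each platform keeps its own deserialization target and extras,
        // and simply satisfies the shared contract on top.
        public class FacebookTimelineItem : ITimelineItem
        {
            public string Id       { get; set; }
            public string Username { get; set; }
            public string Message  { get; set; }
            public DateTime Posted { get; set; }
            public int LikeCount   { get; set; }  // Facebook-specific extra
        }

        public class TwitterTimelineItem : ITimelineItem
        {
            public string Id        { get; set; }
            public string Username  { get; set; }
            public string Message   { get; set; }
            public DateTime Posted  { get; set; }
            public int RetweetCount { get; set; } // Twitter-specific extra
        }

        public class TimelineService
        {
            // Callers get one merged, chronologically sorted timeline.
            public IEnumerable<ITimelineItem> Merge(
                IEnumerable<FacebookTimelineItem> facebook,
                IEnumerable<TwitterTimelineItem> twitter)
            {
                var all = new List<ITimelineItem>();
                all.AddRange(facebook);
                all.AddRange(twitter);
                all.Sort((a, b) => b.Posted.CompareTo(a.Posted)); // newest first
                return all;
            }
        }

    The JSON-deserialization pain is smaller than it looks under this split: each platform deserializes into its own concrete class first, and the interface only comes into play afterwards, so nothing ever has to be deserialized into the interface directly.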
