Search Results

Search found 5637 results on 226 pages for 'triple slash comments'.

Page 125/226 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • Which Bliki (Blog+Wiki) solution can you recommend?

    - by asmaier
    I'm searching for a good Bliki solution, meaning a combination of blog and wiki that I can install on my own web space. I would like to be able to write articles in wiki style, much like with MediaWiki. So I want to use a wiki markup language, have a revision history, comments, internal links to other pages (maybe in other languages), and be able to collaboratively edit the articles. On the other hand, I would like to have a blog-like view of my articles, showing new articles (and changes to existing articles) in a time-ordered fashion. It would be nice if it were possible to search through the articles and also tag them, so one could generate a tag cloud. A nice feature would also be the ability to order the articles by views, or even a voting system for the articles. Also good would be a permission system to keep certain articles private, showing them only to people logged in to the platform. Apart from these nice-to-have features, an absolute must-have for the Bliki platform I'm searching for is the ability to handle math equations (written in LaTeX syntax) and display them either as pictures, like MediaWiki, or even better using MathJax. At the moment I'm using a web service called wikiDot which offers some of the mentioned features; however, the free version shows too many advertisements, the blog feature is not mature, the design is quite ugly, and loading the page is often slow. So I want to install a Bliki solution on my own web space. Can you recommend a solution for that?

    Read the article

  • Blank pale blue screen with Live USB Kubuntu on AMD Sempron 2800+ processor

    - by WGCman
    I am trying to install Kubuntu onto a USB stick to use on my Acer Aspire 1362 laptop with an AMD Sempron 2800+ chip. Using Windows XP, I downloaded and saved to the laptop's hard drive: kubuntu-12.04.1-desktop-i386.iso from the Get Kubuntu website and LinuxLive USB Creator 2.8.16.exe from the LinuxLive website. I then installed the latter and ran it, installing Kubuntu onto the memory stick. Leaving the BIOS setup unchanged, the USB stick is ignored and Windows boots. If I change the BIOS boot order so that the memory stick takes precedence, I see a dark blue screen announcing Kubuntu 12.04, and on selecting either “live Mode” or “Persistent mode”, messages flash by quickly, some of which appear to be error messages, including “trying to unpack rootfs image as initramfs”, “cannot allocate resource for mainboard”, “no plug and play device found”. Eventually I see a pale blue screen with four moving dots announcing Kubuntu 12.04, similar to the login screen of my Kubuntu desktop, but no invitation to log in or indeed any dialog. After several minutes, this changes to a black screen with more messages including “no caching mode present”, “ADDRCONF(NETDEV_UP): wlan0: link is not ready”, then degrades to a blank pale blue screen which can only be cleared by switching the computer off. Finding no way to log the error messages passing by, I managed to photograph most of them, but know of no way to attach the photo to this forum. As suggested by User 68186 (to whom thanks!), I have edited my original post to reflect the recent progress, so the following two comments are now superseded.

    Read the article

  • Novice prototyping a massive multiplayer webpage based gaming system

    - by Sean Hendlin
    I'm trying to build a website-based game in which various pages of the site act as different areas of the game. I am wondering what you would recommend as a design structure. Which languages would be best for building what will hopefully become a massive system able to scale to a massive number of users? I am wondering if and how various elements from differing languages could be meshed to interact with each other. For example, could I use HTML5, JavaScript, and PHP? What about ASP.NET, and how might that factor in? I'm a newbie programmer, but I've been working on this idea for years and I want to build it into reality. Your comments and suggestions are appreciated. P.S.: The game is not all graphics and animation (though a Flash-like appearance and some animation would be nice). What I am thinking of is essentially a heavily gamified system of forms, and LOTS of data in many different categories cross-referencing each other. I'm not sure how to go about structuring the collection of data. Also, while I know JavaScript can be used to process some functions, I'm wondering what sort of base system I would need to handle the server-side processing of what I expect to be some pretty significant algorithm processing. That is to say, I expect to have many, many functions and I'm not sure how to manage this using JavaScript. I feel like they would be forgotten, mixed up, and disorganized, as they essentially only exist where they are coded. I guess I need to learn something about libraries? OK, thank you! That is enough from me for now.

    Read the article

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling improve performance, and by what percentage? It would be good to have a default at the application config level, or at the DBML level, to specify whether all select queries should be compiled. And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found an answer from the author, ricom (6 Jul 2007 3:08 AM): "Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs." Also, from 10 Tips to Improve your LINQ to SQL Application Performance: "If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query." However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and ReSharper should check queries and suggest where compiling is recommended. Related articles for LINQ to SQL: MSDN How to: Store and Reuse Queries (LINQ to SQL); 10 Tips to Improve your LINQ to SQL Application Performance. Related articles for Entity Framework: MSDN: CompiledQuery Class; Exploring the Performance of the ADO.NET Entity Framework - Part 1; Exploring the Performance of the ADO.NET Entity Framework - Part 2; ADO.NET Entity Framework 4.0: Making it fast through Compiled Query.
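    For illustration, here is a minimal sketch of the pattern in question; MyDataContext and Customer are placeholder LINQ to SQL types, not taken from the articles above:

        // Sketch only: assumes a LINQ to SQL DataContext named MyDataContext with a Customers table.
        using System;
        using System.Data.Linq;
        using System.Linq;

        static class CustomerQueries
        {
            // Compiled once; reusing the delegate avoids re-translating the
            // expression tree to SQL on every call.
            public static readonly Func<MyDataContext, string, IQueryable<Customer>> ByCity =
                CompiledQuery.Compile((MyDataContext db, string city) =>
                    from c in db.Customers where c.City == city select c);
        }

        // Usage: var londoners = CustomerQueries.ByCity(db, "London").ToList();

    The first call pays the compilation cost, which is why the advice above only recommends it for queries that run many times.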

    Read the article

  • Join our team at Microsoft

    - by Daniel Moth
    If you are looking for a SDE or SDET job at Microsoft, keep on reading. Back in January I posted a Dev Lead opening on our team, which was quickly filled internally (by Maria Blees). Our team is part of the recently announced Microsoft Technical Computing group. Specifically, we are working on new debugger functionality, integrated with Visual Studio (we are starting work on the next version), aimed to address HPC and GPGPU scenarios (and continuing the Parallel Debugging scenarios we started addressing with VS2010). We now have many more openings on our debugger team. We posted three of those on the careers website: Software Development Engineer Software Development Engineer II Software Development Engineer in Test II (don't let the word "Test" fool you: An SDET on our team is no different than a developer in any way, including the skills required) Please do read the contents of the links above. Specifically, note that for both positions you need to be as proficient in writing C++ code as you are with managed code (WPF experience is a plus). If you think you have what it takes, you wish to join a quality and schedule driven project, and want to contribute features to a product that has global impact, then send me your resume and I'll pass it on to the hiring managers. Comments about this post welcome at the original blog.

    Read the article

  • Should we choose Java over C# or we should consider using Mono?

    - by A. Karimi
    We are a small team of independent developers with an average of 7 years' experience on the C#/.NET platform. We mostly work on small to average web application projects, which allows us to choose our favorite platform. I believe that our current platform (C#/.NET) allows us to be more productive than if we were working in Java, but what makes me think about choosing Java over C# is the costs and the (open source) community. Our projects even allow us to work with various frameworks as well as various platforms. For example, we can even use Nancy. So we are able to decrease costs by using Mono, which can be deployed on Linux servers. But I'm looking for a complete ecosystem (IDE/platform/production environment) that decreases our costs and makes us feel completely supported by the community. As an example of the issues I've experienced with MonoDevelop, I can point to its poor support for the Razor syntax. As another example, we are using "VS 2012 Express for Web" as our IDE to decrease costs, but as you know it doesn't support plugins, and I have serious problems with XML comments (I miss GhostDoc). We strongly believe in strongly-typed programming languages, so please don't suggest other languages and platforms such as Ruby, PHP, etc. Now I want to choose between: keep going with C#, buy some products, and be hopeful about the openness of the .NET ecosystem and its open source community; or change platforms and start using the Java open source ecosystem.

    Read the article

  • Software Design Idea for multi tier architecture

    - by Preyash
    I am currently investigating a multi-tier architecture design for a web-based application in MVC3. I already have an architecture, but I am not sure if it's the best I can do in terms of extensibility and performance. The current architecture has the following components: DataTier (contains EF POCO objects); DomainModel (contains domain-related objects); Global (among other common things, it contains Repository objects for CRUD to the DB); Business Layer (business logic, interaction between data and client, and CRUD using the repository); Web (Client), which talks to the DomainModel and Business layers but also has its own ViewModels for Create and Edit views, for example. Note: I am using ValueInjector for converting one type of entity to another (which is proving an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need a domain model? (I think I do when I expose my business logic via WCF to external clients.) What is happening is that for a simple database insert it has to (1) create a ViewModel, (2) convert the ViewModel to a DomainModel for the Business layer to understand, (3) have the Business layer convert it to a DataModel for the Repository, and then the data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as that does not exist. I am looking for something that is scalable. It should be reusable (e.g. using design patterns, interfaces, inheritance, etc.). Each layer should be easily testable. Any suggestions or comments are much appreciated. Thanks,

    Read the article

  • How can I set my screen resolution to match my TV?

    - by Scott Severance
    I have a computer in my classroom that's connected to an LG smart TV (that's actually not so smart. I wouldn't recommend buying one.). For the touch interface, the TV wants a resolution of 1920x1080 at 60Hz. However, I can't seem to set the computer to that resolution. The display settings only offer 1024x768 and 640x480. The computer dual boots with Windows XP, where widescreen options are available in approximately the required size, but the exact resolution -- or even aspect ratio-- isn't available in XP either. I tried the following command: xrandr -s 1920x1080 -r 60 The response was: Size 1920x1080 not found in available modes Back in the old days, the solution would be to edit xorg.conf. However, since that file no longer exists, and I haven't found up-to-date info, I don't know what else to do. If it helps, this machine will never be connected to a different display, so resolution flexibility isn't important. Here's the output of lshw: *-display:0 description: VGA compatible controller product: 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 03 width: 64 bits clock: 33MHz capabilities: vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:42 memory:fe800000-febfffff memory:d0000000-dfffffff ioport:ecd8(size=8) *-display:1 UNCLAIMED description: Display controller product: 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 03 width: 64 bits clock: 33MHz According to the system settings, my graphics driver is unknown and my "experience" is standard. This is 64-bit Ubuntu 12.04 (Precise) Note: There are a number of similar questions to this one, but they didn't include any answers that helped me. Update After posting this question, I noticed one in the sidebar that I hadn't found through search but which appeared to contain the answer. 
Based on that question, I created the /etc/X11/xorg.conf file below: Section "ServerLayout" Identifier "X.org Configured" Screen 0 "Screen0" 0 0 InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "Files" ModulePath "/usr/lib/xorg/modules" FontPath "/usr/share/fonts/X11/misc" FontPath "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" FontPath "built-ins" EndSection Section "Module" Load "glx" Load "dri2" Load "dbe" Load "dri" Load "record" Load "extmod" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "kbd" EndSection Section "InputDevice" Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "ZAxisMapping" "4 5 6 7" EndSection Section "Monitor" Identifier "Monitor0" VendorName "LG" ModelName "Smart TV" EndSection Section "Device" ### Available Driver options are:- ### Values: <i>: integer, <f>: float, <bool>: "True"/"False", ### <string>: "String", <freq>: "<f> Hz/kHz/MHz", ### <percent>: "<f>%" ### [arg]: arg optional #Option "DRI" # [<bool>] #Option "ColorKey" # <i> #Option "VideoKey" # <i> #Option "FallbackDebug" # [<bool>] #Option "Tiling" # [<bool>] #Option "LinearFramebuffer" # [<bool>] #Option "Shadow" # [<bool>] #Option "SwapbuffersWait" # [<bool>] #Option "TripleBuffer" # [<bool>] #Option "XvMC" # [<bool>] #Option "XvPreferOverlay" # [<bool>] #Option "DebugFlushBatches" # [<bool>] #Option "DebugFlushCaches" # [<bool>] #Option "DebugWait" # [<bool>] #Option "HotPlug" # [<bool>] #Option "RelaxedFencing" # [<bool>] Identifier "Card0" Driver "intel" BusID "PCI:0:2:0" EndSection Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" DefaultDepth 24 #SubSection "Display" # Viewport 0 0 # Depth 1 #EndSubSection #SubSection "Display" # Viewport 0 0 # Depth 4 #EndSubSection #SubSection "Display" # Viewport 0 0 # Depth 8 #EndSubSection #SubSection "Display" # Viewport 0 0 # Depth 15 #EndSubSection #SubSection "Display" # Viewport 0 0 # Depth 16 #EndSubSection SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "1920x1080" EndSubSection EndSection According to /var/log/Xorg.0.log, my settings aren't being applied. In fact, I wonder if the config file is even being read. [ 1209.083] (**) intel(0): Depth 24, (--) framebuffer bpp 32 [ 1209.084] (==) intel(0): RGB weight 888 [ 1209.084] (==) intel(0): Default visual is TrueColor [ 1209.084] (II) intel(0): Integrated Graphics Chipset: Intel(R) G41 [ 1209.084] (--) intel(0): Chipset: "G41" [ 1209.084] (**) intel(0): Relaxed fencing enabled [ 1209.084] (**) intel(0): Wait on SwapBuffers? enabled [ 1209.084] (**) intel(0): Triple buffering? 
enabled [ 1209.084] (**) intel(0): Framebuffer tiled [ 1209.084] (**) intel(0): Pixmaps tiled [ 1209.084] (**) intel(0): 3D buffers tiled [ 1209.084] (**) intel(0): SwapBuffers wait enabled [ 1209.084] (==) intel(0): video overlay key set to 0x101fe [ 1209.172] (II) intel(0): Output VGA1 using monitor section Monitor0 [ 1209.260] (II) intel(0): EDID for output VGA1 [ 1209.260] (II) intel(0): Printing probed modes for output VGA1 [ 1209.260] (II) intel(0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz) [ 1209.260] (II) intel(0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz) [ 1209.260] (II) intel(0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz) [ 1209.260] (II) intel(0): Modeline "848x480"x60.0 33.75 848 864 976 1088 480 486 494 517 +hsync +vsync (31.0 kHz) [ 1209.260] (II) intel(0): Modeline "640x480"x59.9 25.18 640 656 752 800 480 489 492 525 -hsync -vsync (31.5 kHz) [ 1209.260] (II) intel(0): Output VGA1 connected [ 1209.260] (II) intel(0): Using user preference for initial modes [ 1209.260] (II) intel(0): Output VGA1 using initial mode 1024x768 [ 1209.260] (II) intel(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. [ 1209.260] (II) intel(0): Kernel page flipping support detected, enabling [ 1209.260] (==) intel(0): DPI set to (96, 96)
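    For reference, a sketch of the usual manual workaround (the VGA1 output name is taken from the Xorg log above; the modeline should be regenerated with cvt on the actual machine, and whether the G41's VGA output will actually accept this mode is a separate question):

        $ cvt 1920 1080 60
        # cvt prints a Modeline; feed its timings to xrandr:
        $ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        $ xrandr --addmode VGA1 1920x1080_60.00
        $ xrandr --output VGA1 --mode 1920x1080_60.00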

    Read the article

  • Simple Architecture Verification

    - by Jean Carlos Suárez Marranzini
    I just made an architecture for an application with the function of scoring, saving and loading tennis games. The architecture has 2 kinds of elements: components & layers. Components: standalone elements that can be consumed by other components or by layers. They might also consume functionality from the model/bottom layer. Layers: software components whose functionality rests on previous layers (except for the model layer). -Layers: -Models: Data and its behavior. -Controllers: A layer that allows interaction between the views and the models. -Views: The presentation layer for interacting with the user. -Components: -Persistence: Makes sure the game data can be stored away for later retrieval. -Time Machine: Records changes in the game through time so it's possible to navigate the game back and forth. -Settings: Contains the settings that determine how some of the game logic will apply. -Game Engine: Contains all the game logic, which it applies to the game data to determine the path the game should take. This is an image of the architecture (I don't have enough rep to post images): http://i49.tinypic.com/35lt5a9.png The requirements which this architecture should satisfy are the following: Save & load games. Move through game history and see how the scoreboard changes as the game evolves. Tie-breaks must be properly managed. Games must be classified by hit-type. Every point can be modified. Match name and player names must be stored. Game logic must be configurable by the user. I would really appreciate any kind of advice or comments on this architecture, to see if it is well built and makes sense as a whole. I took the idea from this link. http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller

    Read the article

  • Discovering Your Project

    - by Tim Murphy
    The discovery phase of any project is both exciting and critical to the project’s success.  There are several key points that you need to keep in mind as you navigate this process. The first thing you need to understand is who the players in the project are and what their motivations are for the project.  Leaving out a key stakeholder in the resulting product is one of the easiest ways to doom your project to fail.  The better the quality of the input you have at this early phase, the better chance you will have of creating a well accepted deliverable. The next task you should tackle is to gather the goals for the project.  Specifically, what does the company expect to get for the money they are about to lay out?  This seems like a common sense task, but you would be surprised how many teams go straight to building the system.  Even if you are following an agile methodology I believe that this is critical. Inventorying the resources that already exist gives you an idea of what you are going to have to build and what you can leverage at lower risk.  This list should include documentation, servers, code repositories, databases, languages, security systems and supporting teams.  All of these are “resources” that can affect the cost and delivery schedule of your project. Finally, you need to verify what you have found and documented with the stakeholders and subject matter experts.  Documentation that has not been reviewed is actually a list of assumptions, and we all know that assumptions are the mother of all screw ups. If you give the discovery phase of your project the attention that it deserves, your project has a much better chance of success. I would love to hear what other people find important for this phase.  Please leave comments on this post so we can share the knowledge. del.icio.us Tags: Project discovery,documentation,business analysis,architecture

    Read the article

  • Adding 2D vector movement with rotation applied

    - by Michael Zehnich
    I am trying to apply a slight sine wave movement to objects that float around the screen to make them a little more interesting. I would like to apply this to the objects so that they oscillate from side to side, not front to back (so the oscillation does not affect their forward velocity). After reading various threads and tutorials, I have come to the conclusion that I need to create and add vectors, but I simply cannot come up with a solution that works. This is where I'm at right now, in the object's update method (updated based on comments):

        Vector2 oldPosition = new Vector2(spritePos.X, spritePos.Y); //note: newPosition is initially set in the constructor to spritePos.x/y
        Vector2 direction = newPosition - oldPosition;
        Vector2 perpendicular = new Vector2(direction.Y, -direction.X);
        perpendicular.Normalize();
        sinePosAng += 0.1f;
        perpendicular.X += 2.5f * (float)Math.Sin(sinePosAng);
        spritePos.X += velocity * (float)Math.Cos(radians);
        spritePos.Y += velocity * (float)Math.Sin(radians);
        spritePos += perpendicular;
        newPosition = spritePos;

    Read the article

  • Best solution for getting referral information in PHP

    - by absentx
    I am currently redoing some link structuring on a website. In the past we have used specific PHP files in the last step to direct the user to the proper place. Example: www.mysite.com/action/go-to-blue.php, or www.mysite.com/action/short/go-to-red.php and www.mysite.com/action/tall/go-to-red.php. We are now restructuring to eliminate the /short/ and /tall/ directories. What this means is that now "go-to-blue.php" will be doing some extra processing to make sure it sends the visitor to the proper place. The static method of the past was quite effective because, well, if they left from that page we knew we had it right. Now, since we are 301 redirecting action/short/go-to-red.php to just action/go-to-red.php, it is quite important in "go-to-red.php" that we realize a user may have been redirected from /short/ or /tall/. So right now I am using HTTP_REFERER, and of course in my testing that works fine, but after a lot of reading it is clear that this is not a solid solution, so I started brainstorming other ways to check and make sure we get the proper referral information. If we could check HTTP_REFERER plus some other test, I would feel confident we have a pretty good system in place to send the visitor to the right place. Some questions/comments: Could I use a session variable or a cookie to accomplish this goal? If so, would it be maintained through the 301 redirect? I don't see why it wouldn't be. Passing the value in the URL is not an option in this case.
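    A minimal sketch of the session idea, assuming the old /short/ and /tall/ scripts still execute and issue the redirect themselves (the file names follow the question; the session key is made up). The session cookie is an ordinary same-domain cookie, so it does survive the 301:

        <?php
        // action/short/go-to-red.php -- stamp the session, then 301 to the consolidated page.
        session_start();
        $_SESSION['came_from'] = 'short';
        header('Location: /action/go-to-red.php', true, 301);
        exit;

        <?php
        // action/go-to-red.php -- prefer the session value, fall back to a referrer check.
        session_start();
        if (isset($_SESSION['came_from'])) {
            $origin = $_SESSION['came_from'];
        } elseif (isset($_SERVER['HTTP_REFERER']) && strpos($_SERVER['HTTP_REFERER'], '/short/') !== false) {
            $origin = 'short';
        } else {
            $origin = 'tall';
        }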

    Read the article

  • How do you structure your shared code so that it is "re-findable" for new developers?

    - by awmckinley
    I started working at my current job about 8 months ago, and it's been one of the best experiences I've had as a young programmer. It's a small company, and both my co-developers are brilliant guys. One of the practices that they both have been encouraging is lots of code-reuse. Our code base is mainly C#, and we're using a centralized revision control system. The way the repository is currently structured, there is a single folder in which all shared class libraries are placed (along with unit tests for each library), and our revision control system allows for sharing or linking those libraries out to other projects. What I'm trying to understand at this point is how the current structure of the folder can be made more conducive to finding those libraries again. I've talked to the other developers about this, and they agree that it's gotten a little messy. I find that I am sometimes "reinventing the wheel" because I didn't realize that there was an existing piece of code that solved a particular problem. The issue is complicated further by the fact that we're sharing some code between ASP.NET MVC2, WinForms, and Windows CE projects, and sharing code between applications built against multiple versions of .NET. How do other people approach this? Is the answer in naming the libraries in a certain way, or is it preferable to invest in some code-search software? Is the answer in doc comments? Should we be sharing libraries at all, or should we simply branch the class libraries for re-use? Thanks for any and all help!

    Read the article

  • How to recover broken dpkg after lucid-bleed ppa-purge?

    - by TryTryAgain
    Did a ppa-purge of lucid-bleed and dpkg didn't downgrade properly and now it is broken. dpkg: PreDepends: tar (>= 1.23) but 1.22-2ubuntu1 is to be installed What scares me is when simulating the removal of dpkg I get: Removing this package may render the system unusable. Are you sure you want to do that? and then the list of packages which depend on it, which will also be removed, is obviously very long. Is it safe for me to remove dpkg just to reinstall it? How would I ensure the list of packages which were also removed are then reinstalled? Will forcing the version of dpkg help? (FYI: simulating a forced version brings up a much smaller list of applications which will also be removed). Any other suggestions? Additional information based on comments: ppa-purge log: http://pastebin.com/1kT8cLvP If I sudo apt-get install dpkg=1.15.5.6ubuntu4.5 I get The following packages have unmet dependencies: libdpkg-perl: Depends: dpkg (= 1.15.8) but 1.15.5.6ubuntu4.5 is to be installed which sucks because that means more would be broken after doing so...but when I force the version through Synaptic I get: To be removed alien, build-essential, cdbs, checkinstall, debhelper, devscripts, dpkg-dev, google-earth-stable, googleearth-package, libdpkg-perl, lintian, lsb, lsb-core, lsb-cxx, lsb-desktop, lsb-graphics, lsb-languages, lsb-multimedia, lsb-printing, lsb-qt4, lsb-security, ubuntu-dev-tools.

    Read the article

  • Oracle R Distribution 2-13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2-13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux: The Intel Math Kernel Library (MKL) on Intel chips The AMD Core Math Library (ACML) on AMD chips To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R.  Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than using R's standard BLAS library. Open-source R is linked to NetLib's BLAS libraries, but they are not multi-threaded and only use one core. While R's internal BLAS are efficient for most computations, it's possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking to R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML. For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance. For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.
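    On Linux, for example, the change amounts to something like the following shell snippet (the MKL path is illustrative; point it at wherever the libraries are actually installed):

        # Make the MKL shared libraries visible before starting R (path is an example).
        export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH
        R    # linear algebra in this session now uses the multi-threaded MKL BLAS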

    Read the article

  • Announcing SharePoint Saturday Columbus 2010

    - by Brian Jackett
    It is with great pleasure that today I can announce the very first SharePoint Saturday Columbus.  SharePoint Saturday Columbus 2010 will be happening on August 14th at The Conference Center at OCLC in Dublin, OH.  As many of the readers of my blog may be aware I’ve attended or spoken at over half a dozen SharePoint Saturdays in the past 8 months alone, but this will be my first time actually organizing one.  Myself and a group of very dedicated individuals have been hard at work the past few months getting the ball rolling and we’re happy to see it taking shape.   Pertinent Resources Website – find announcements and up to the date details at www.SharePointSaturday.org/Columbus Twitter – follow us at @SPSColumbus Email – email us at [email protected] with any questions, comments, or concerns   What can you do?     There are three main areas that we are looking for your help at this time. Spread the word – simply put start spreading the word to friends, coworkers, user groups, clients, and anyone else you think may be interested in SharePoint Saturday Columbus 2010.  We’ll be opening registration in early July so look for an announcement with details closer to that timeframe. Sponsorship – if your company or a company you know is interested in sponsoring SharePoint Saturday Columbus 2010 we have many opportunity levels available.  Email [email protected] for more information and we’ll send you a sponsorship packet. Speakers – if you or someone you know is interested in presenting at SharePoint Saturday Columbus 2010 please fill out a speaker submission form found here and email it to [email protected] by July 10th. I hope you can join us for this great event!         -Frog Out

    Read the article

  • Oracle Fusion Applications Design Patterns Now Available For Developers

    - by ultan o'broin
    The Oracle Fusion Applications user experience design patterns are published! These new, reusable usability solutions and best-practices, which will join the Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now, the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. The design patterns are based on Oracle ADF components and easily implemented in Oracle JDeveloper. These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when: tailoring an Oracle Fusion application, creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road, or designing exciting, new, highly usable applications in the cloud or on-premise. Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What’s the best way to get started? We’ve made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you’re on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments!.

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, its high frequency, the variety of less structured data sources such as email, sensors, smart meters, social networks, and Weblogs, and the need to analyze vast amounts of data to determine value to improve upon management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of Master data by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, Financial Services, etc that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data or in other words ROI from Big Data Investments, businesses need to acquire, organize and analyze the deluge of data to make better decisions. There will need to be a coexistence of structured and unstructured data and to maintain a tight link between the two to extract maximum insights. MDM is the catalyst that helps maintain that tight linkage by providing an understanding about the identity, characteristics of Persons, Companies, Products, Suppliers, etc. associated with the Big Data and thereby help accelerate ROI. In my next post I will discuss about patterns for co-existing Big Data Solutions and MDM. Feel free to provide comments and thoughts on above as well as Integration or Architectural patterns.

    Read the article

  • Soft lockup after upgrade - cannot install from live CD

    - by nbm
    I dual-boot MacIntel Core 2 duo. nVidia graphics. Ran upgrade from ubuntu 13.10 to 14.04 (64 bit). On restart ran into {numbers} Bug: soft lockup - CPU#0 stuck for 22s! [swapper/0:1] Tried loading earlier kernel: same problem Tried re-installing ubuntu from a liveCD that has worked in the past: version 13.04. Same problem. Tried re-partitioning hard drive using Mac OS X disk utility and then installing ubuntu 14.04LTS from liveCD. Same problem. Not possible to verify liveCD disk (creates same "soft lockup" bug.) Tried installing from the liveCD with version 13.04 that I know works (that's how I got Ubuntu on this machine in the first place.) Same problem. I know this is not a hardware problem as OS X works just fine, I am using it right now on the same machine. I have been using various versions of Ubuntu for 2 years. Things I cannot do: Open a terminal Verify CD image Start ubuntu from CD (same soft lockup problem) This problem is similar to some other questions, none of which have been satisfactorily answered: Ubuntu 14.04 soft lockup on Vostro 3500 Cannot do fresh install of Ubuntu 13.04 while booting from DVD: "soft lockup" bug Live CD stalls when installing Ubuntu 13.10 UPDATE 6/11/14: Following some much-appreciated advice from bain (see below) I burned a 12.04LTS disk and started with kernel parameters: noapic, no1apic, acpi=off, nomodeset, elevator=deadline, and clocksource=jiffies. With all of these parameters I was able to load the 12.04LTS CD ("Try without installing"). It worked fine. However, as soon as I tried to install Ubuntu from the CD, my wired ethernet (eth0) connection would hang. There are already various askubuntu questions and bug reports about this problem, none of which had answers for me. (E.g., dhclient eth0 does nothing, none of the various reset commands does anything, manually setting IP &etc does nothing. I could reliably kill the ethernet connection by clicking "install ubuntu" every single time.) I could go ahead and install 12.04 without an internet connection, but the install would freeze after mostly completing (I tried several times.) There were some relevant error messages in the details of the install output script that, IIRC, had to do with searching for missing files and not being able to access eth0 (internet) to get them. To be honest I gave up at that point and I'm not sure I wrote those down. If I find some notes I will post them. At this point I no longer have Ubuntu on my system. I wiped the partitions and am using exclusively OS X. I am leaving this question in case it helps anyone else with similar problems. I love open source and I love Linux, and the next machine I get I will probably just build from Arch. At the moment I miss repositories and a lot of other things about Ubuntu, but the OS X terminal is 'nix, I can pretty much use all the open source apps I like, and while I am not a fan of the Apple software it gets the job done for me. Unlike Ubuntu, which can't even install. I realize this isn't necessarily a place for a soapbox speech, but when I first installed 12.04 several years ago there were already people in the community complaining that Canonical was going too "commercial". But I loved it. Several years later and all I've seen is Canonical adding more not-so-useful bells and whistles to Ubuntu while continually failing to fix basic problems on upgrades. With a dual-boot (and sometimes triple-boot) system it always took me some tweaking to get an upgrade to work, and to some extent that is okay. 
But at this point I feel like Canonical ought to just put a price tag on Ubuntu. All I see is more commercialism and advertising and product tie-ins, and ongoing problems do not get fixed. I am a big fan of open-source, not-for profit enterprise. I am also a big fan of for-profit enterprise, which certainly has its place and usefulness. I am not a fan of companies who pretend to be in favor of open source but really are just out to make a buck, and IMNSHO that is what Canonical has become. This is a great community and I wish you all the best, but my next install of Linux will not be Ubuntu.

    Read the article

  • We are moving an Access based corporate front-end into a Web-based App

    - by Max Vernon
    We have an enterprise application with a front end written in Microsoft Access 2003 that has evolved over the past 6 years. The back end data, and a fair amount of back-end logic is contained within several Microsoft SQL Server databases. This front end app consists of around 180 forms, and over 120,000 lines of code, and interacts with VB.Net DLLs that support various critical functions used by our sales force. The current system makes use of 3 monitors to display various information; the Access app uses COM+ to control Microsoft Outlook and Internet Explorer for various purposes. The Access front end sometimes occupies 2 screens, automatically resizing itself based on Windows API-reported screen dimensions. The app also uses a Google map to present data to our agents, and allows two-way interactivity with the map through COM+ connectivity to JavaScript contained in the Google map. At the urging of senior management, we are looking to completely rewrite this application using some web-based technology, such as ASP.Net or perhaps a LAMP stack (the thinking with the LAMP stack thing is "free" is pretty cheap). We want to move to a web-based app so we can eliminate the dependency on our physical location for hiring new sales force members. Currently, our main office is full to capacity, and we need to continue growing the company. Does anyone have any thoughts on what would be the best technology to use for a web-based app of this magnitude? Keeping in mind the app is dependent on back-end services on our existing infrastructure. The app handles financial data and personal customer data, among other things. [I've looked at Best practices for moving large MS Access application towards .Net? and read the answers, and most of the comments. Interesting reading, and has some valid points, but our C.O.O. and contracted Software Architect are pushing for a full web-based app, not a .Net Windows App]

    Read the article

  • An ideal way to decode JSON documents in C?

    - by AzizAG
    Assuming I have an API to consume that uses JSON as a data transmission method, what is an ideal way to decode the JSON returned by each API resource? For example, in Java I'd create a class for each API resource, then instantiate an object of that class and consume data from it. For example:

        class UserJson extends JsonParser {
            public UserJson(String document) { /* Initial document parsing goes here... */ }
            // A bunch of getter methods...
        }

    Then I'd probably do something like this:

        UserJson userJson = new UserJson(jsonString); // Initial parsing goes in the constructor
        String username = userJson.getName();         // Parse the JSON name property and return it as a String

    Or, when using a programming language with associative arrays (i.e., hash tables), the decoding process doesn't require creating a class (PHP):

        $userJson = json_decode($jsonString, true); // Decode JSON as key => value
        $username = $userJson['name'];

    But when I'm programming in a procedural language (C), I can't go with either method, since C is neither OOP nor supports associative arrays (by default, at least). What is the "correct" method of parsing pre-defined JSON strings (i.e., JSON documents specified by the API provider via examples or documentation)? The method I'm currently using is creating a file for each API resource to parse. The problem with this method is that it's basically a lousy version of the OOP method: it looks exactly like the OOP method but doesn't provide any OOP benefits (e.g., you can't pass an object of the parser around, etc.). I've been thinking about encapsulating each API resource parser file in a publicly accessible structure (pointing all functions/publicly usable variables to the structure), then accessing the parser file code through the structure (parser.parse(), parser.getName(), etc.). While this looks a bit better than my current method, it's still just a rip-off of the OOP way, isn't it? Any suggestions for methods to parse JSON documents in procedural programming languages? Any comments on the methods I'm currently using (any of the three)?
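    One rough C sketch of the "struct as a poor man's object" idea, assuming a third-party JSON parser such as jansson is available (the UserJson naming mirrors the Java example; everything else here is illustrative, not taken from any particular API):

        #include <stdlib.h>
        #include <string.h>
        #include <jansson.h>   /* assumed third-party JSON library */

        typedef struct {
            json_t *root;      /* parsed document, owned by this struct */
            char   *name;      /* cached copy of the "name" property    */
        } user_json;

        /* "Constructor": parse once, cache what the getters need. Returns 0 on success. */
        int user_json_init(user_json *u, const char *document) {
            json_error_t err;
            u->root = json_loads(document, 0, &err);
            if (u->root == NULL)
                return -1;
            json_t *name = json_object_get(u->root, "name");
            u->name = json_is_string(name) ? strdup(json_string_value(name)) : NULL;
            return 0;
        }

        /* "Getter" */
        const char *user_json_get_name(const user_json *u) { return u->name; }

        /* "Destructor" */
        void user_json_free(user_json *u) {
            free(u->name);
            json_decref(u->root);
        }

    The struct plus init/get/free functions gives roughly the pass-around-able parser object the question asks about, without pretending C has classes.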

    Read the article

  • SEO - different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect my users based on the device they are using. Let's say a user is visiting from an iPad; I take him directly to the page of iPad wallpapers, the user selects the iPad version, and I take the user to the gallery of wallpapers where the user can select and download any wallpaper. Every wallpaper is the required resolution; I have my reasons for doing this. Now, the thing is there are different resolution versions of an image appearing in 5 different sections of my website, each having its own view page. There is only one record in the DB table for the image, and based on my consistent naming convention for the images, I pick the required image. This means that when 5 different pages are generated in 5 categorized sections of the website, due to the shared DB record the keywords, the titles, and every single detail of the 5 pages are the same, besides the resolution of the image and the section-specific details that the page has. And yes, the pages also have different paths, like wallpapers.com\ipad-1\cars\Ferrari-dino.html, wallpapers.com\ipad-2\cars\Ferrari-dino.html, wallpapers.com\ipad-3\cars\Ferrari-dino.html, wallpapers.com\ipad-4\cars\Ferrari-dino.html, wallpapers.com\ipad-5\cars\Ferrari-dino.html. So that is my scenario. How do search engines see it, and how do they rank it? Is it a good, normal, or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

    Read the article

  • Ubuntu installation does not recognize drive partinioning

    - by Woltan
    I have a 1TB drive and installed Windows 7 on a 128GB partition. When I now try to install Ubuntu 11.04 it does not recognize the Windows partition but offers the complete 1TB drive to install Ubuntu on instead. It displays: However, in the Ubuntu Disk Utility the Windows partitions are recognized. What do I need to do in order for Ubuntu to recognize the Windows 7 partition and install Ubuntu as a dual boot? Response to comments: The following commands were executed and the results are shown below.

        fdisk -l
        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x34a38165
        Device Boot Start End Blocks Id System
        /dev/sda1 * 1 13 102400 7 HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2 13 16318 130969600 7 HPFS/NTFS
        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x14a714a6
        Device Boot Start End Blocks Id System
        /dev/sdb1 1 60801 488384001 83 Linux

        parted -l
        Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
        Error: /dev/sr0: unrecognised disk label

    Read the article

  • How to add a new developer to the team

    - by lortabac
    I run a small company composed of only 2 developers. For one of our clients we are building a very big application, whose development has gone on for 1.5 years. Now this client has found an important sponsorship, and they are organizing some events related to this project, so we have a deadline in 2 months and we can't miss it. We are thinking of adding a new developer to the team, and I am wondering what we can do to help his integration. This is the situation: We are approaching the threshold of Brooks's law, the point at which adding new developers becomes counter-productive. The application is relatively well designed, but the implementation is chaotic in places (especially older code). There are unit tests only for more recent code. When this project started, we didn't have the habit of writing tests. Documentation and comments are incomplete. The application is both large and complex. The client has written down almost every detail about his project, in a very clear and "programmer-friendly" way. Is it a good idea to add a person now? If so, what can we do in order to help the new developer integrate into the team?

    Read the article

  • Hello PCI Council, are you listening?

    - by David Dorf
    Mention "PCI" to any retailer and you'll instantly see them take a deep breath and start looking for the nearest exit.  Nobody wants to be insecure, but few actually believe that PCI does anything more than focus blame directly on retailers.  I applaud PCI for making retailers more aware of the importance of security, but did you have to make them PAINFULLY aware?  POS vendors aren't immune to this pain either as we have to undergo lengthy third-party audits in addition to the internal secure programming programs.  There's got to be a better way. There's a timely article over at StorefrontBacktalk that discusses the inequity of PCI's rules, and also mentions that the PCI Council is accepting comments until April 15th. As a vendor, my biggest issue with PCI is that they require vendors to disclose the details of any breaches, in effect "ratting out" customers.  I don't think its a vendor's place to do this.  I'd rather have the trust of my customers so we can jointly solve the problem. Mary Ann Davidson, Oracle's Chief Security Officer, has an interesting blog posting on this very topic.  Its a bit of a long read, but I found it very entertaining and thought-provoking.  Here's an excerpt: ...heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give [the] PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. I encourage you to read the entire posting, Pain Comes Instantly, and then provide feedback to the PCI Council.

    Read the article
