Search Results

Search found 4690 results on 188 pages for 'multi tenant'.


  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem, as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

    The Trick

    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtain the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw that the things I was interested in were called TableType, DbTableName, FriendlyName and QueryDefinition. Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously, making the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

        using System;
        using System.Data;
        using System.IO;
        using Microsoft.AnalysisServices;

        ... class code removed for clarity

        // connect to the OLAP server
        Server olapServer = new Server();
        olapServer.Connect(config.olapServerName);
        if (olapServer != null)
        {
            // connected to server ok, so obtain reference to the OLAP database
            Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
            if (olapDatabase != null)
            {
                Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                    config.olapDatabaseName, config.olapServerName));

                // export SQL from each data source view (usually only one, but can be many!)
                foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
                {
                    Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                    // for each table in the DSV, export the SQL in a file
                    foreach (DataTable dt in dsv.Schema.Tables)
                    {
                        Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                        // get name of the table in the DSV
                        // use the FriendlyName as the user inputs this and therefore has control of it
                        string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                        string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                        // delete the sql file if it exists ... file deletion code removed for clarity

                        // write out the SQL to a file
                        if (dt.ExtendedProperties["TableType"].ToString() == "View")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                        }
                        if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                        }
                    }
                }
                Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
            }
        }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This means this utility will be of limited practical value, unless of course you are inheriting a project and want to check whether someone did the implementation correctly.

    Read the article

  • Gartner PCC: A Shovel & Some Ah-Ha's

    - by kellsey.ruppel
    When Gartner Vice President and leading analyst Whit Andrews kicked off the Gartner Portals, Content & Collaboration Summit on Monday, March 12 at the Gaylord Palms in Orlando, FL by bringing a shovel to the stage, eyebrows raised and a few thoughts went through my head. Either this guy plans to go help the construction workers outside build that new pool at the Gaylord, or he took a wrong turn and is at the wrong conference. And how did he get that shovel through airport security? As Whit explained, his objective became clear: take everything anyone has ever told you about portals and throw it out the window, because portals have evolved and times they are most certainly changing. The future Web is here, available not only on browsers but also via a broad spectrum of access points, including automobiles, consumer electronics and more and more mobile devices. Not merely prevalent, the future Web is also multimedia-driven and operates in real time, driven by mobility, social media, streaming video and other dynamic services. Applications and user experiences are in the midst of an evolution: from the early, simple mobile Web models to today's Web 2.0 mobile apps and, ultimately, to a world of predominantly Web apps. Additionally, cloud services will forever change how portals and user experiences are designed, built, delivered, sourced and managed. So what does this mean for you? Today's organizations need software that will enable them to not just do their jobs, but to do them in a way that is familiar and easy. What does this mean for IT? Use software and technology as an enabler, not as a roadblock. Overall, we had a great week in Orlando learning about how to improve the user experience, manage content explosion, launch social initiatives, transition to mobile environments and understand cloud and SaaS options. We had some great conversations throughout the conference and at the Oracle booth. Lots of demonstrations were given of Oracle WebCenter Sites and Oracle Social Network. And as Christie mentioned earlier this week, our Vice President of Product Management and Strategy for WebCenter, Loren Weinberg, presented on the topic of customer engagement and talked about how organizations' relationships with their customers have fundamentally changed today and the resulting impact that has on their priorities. Loren also talked about the importance of customer engagement, why that matters now more than ever, and what you can do to help your company or organization succeed in this new world. The question asked in every keynote and session was a simple one: What is your "ah-ha" moment? I personally had quite a few, some of which I've captured below.
    - 70% of internal social initiatives eventually fail.
    - By 2014, refusing to communicate with consumers via social media will be as harmful as ignoring emails/phone calls is today.
    - Customer engagement = multi-channel + social & interactive + personal & relevant + optimized.
    - If people choose to talk about your product/company/service, it's because it's remarkable. -- Seth Godin's keynote (one of the highlights of the conference!)
    - The Web will become the primary method used for delivering content and applications to mobile devices.
    - By 2015, 20% of smartphone users worldwide will conduct commerce using context-enriched services on a weekly basis.
    - 86% of customers will pay more for a better customer experience.
    - The 6 P's of a quality user experience: Product, enabled by People, Patterns, Process, Profit, and Priorities.
Did you attend the Gartner Summit? What were your ah-ha moments?

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run in separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group's, department's, or application's information is not visible to other personnel or applications resident on the Exadata system. Administration of the environments requires formal separation of duties, so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments, just like other critical production servers. They should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:
    1. InfiniBand, which functions as a secure private backplane
    2. IPTables, to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
    3. Hardware-accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault
    Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including controlling when a command can be executed, or allowing the DBA to manage the health of the database and its objects without being able to see the data.

    Advanced Security
    Oracle Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, it provides the most cost-effective solution for comprehensive data protection.

    Label Security
    Oracle Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, it provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking
    Oracle Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data, so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall
    Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.

    Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.

    Read the article

  • Incomplete upgrade 12.04 to 12.10

    - by David
    Everything was running smoothly. Everything had been downloaded from the Internet, packages had been installed, and a prompt asked whether some obsolete programs/files should be removed or kept. After that the computer crashed and I had to manually force a shutdown. I turned it on again and, surprise, I was on 12.10! Still, the upgrade was not finished! How can I properly finish that upgrade? Here's the output I got in the command line after following posted instructions:

    i    astrill - Astrill VPN client software
    i    dayjournal - Simple, minimal, digital journal.
    i    gambas2-gb-form - A gambas native form component
    i    gambas2-gb-gtk - The Gambas gtk component
    i    gambas2-gb-gtk-ext - The Gambas extended gtk GUI component
    i    gambas2-gb-gui - The graphical toolkit selector component
    i    gambas2-gb-qt - The Gambas Qt GUI component
    i    gambas2-gb-settings - Gambas utilities class
    i A  gambas2-runtime - The Gambas runtime
    i    google-chrome-stable - The web browser from Google
    i    google-talkplugin - Google Talk Plugin
    i    indicator-keylock - Indicator for Lock Keys
    i    indicator-ubuntuone - Indicator for Ubuntu One synchronization s
    i A  language-pack-kde-zh-hans - KDE translation updates for language Simpl
    i    language-pack-kde-zh-hans-base - KDE translations for language Simplified C
    i    libapt-inst1.4 - deb package format runtime library
    idA  libattica0.3 - a Qt library that implements the Open Coll
    idA  libbabl-0.0-0 - Dynamic, any to any, pixel format conversi
    idA  libboost-filesystem1.46.1 - filesystem operations (portable paths, ite
    idA  libboost-program-options1.46.1 - program options library for C++
    idA  libboost-python1.46.1 - Boost.Python Library
    idA  libboost-regex1.46.1 - regular expression library for C++
    i    libboost-serialization1.46.1 - serialization library for C++
    idA  libboost-signals1.46.1 - managed signals and slots library for C++
    idA  libboost-system1.46.1 - Operating system (e.g. diagnostics support
    idA  libboost-thread1.46.1 - portable C++ multi-threading
    i    libcamel-1.2-29 - Evolution MIME message handling library
    i    libcmis-0.2-0 - CMIS protocol client library
    i    libcupsdriver1 - Common UNIX Printing System(tm) - Driver l
    i    libdconf0 - simple configuration storage system - runt
    i    libdvdcss2 - Simple foundation for reading DVDs - runti
    i    libebackend-1.2-1 - Utility library for evolution data servers
    i    libecal-1.2-10 - Client library for evolution calendars
    i    libedata-cal-1.2-13 - Backend library for evolution calendars
    i    libedataserver-1.2-15 - Utility library for evolution data servers
    i    libexiv2-11 - EXIF/IPTC metadata manipulation library
    i    libgdu-gtk0 - GTK+ standard dialog library for libgdu
    i    libgdu0 - GObject based Disk Utility Library
    idA  libgegl-0.0-0 - Generic Graphics Library
    idA  libglew1.5 - The OpenGL Extension Wrangler - runtime en
    i    libglew1.6 - OpenGL Extension Wrangler - runtime enviro
    i    libglewmx1.6 - OpenGL Extension Wrangler - runtime enviro
    i    libgnome-bluetooth8 - GNOME Bluetooth tools - support library
    i    libgnomekbd7 - GNOME library to manage keyboard configura
    idA  libgsoap1 - Runtime libraries for gSOAP
    i    libgweather-3-0 - GWeather shared library
    i    libimobiledevice2 - Library for communicating with the iPhone
    i    libkdcraw20 - RAW picture decoding library
    i    libkexiv2-10 - Qt like interface for the libexiv2 library
    i    libkipi8 - library for apps that want to use kipi-plu
    i    libkpathsea5 - TeX Live: path search library for TeX (run
    i    libmagickcore4 - low-level image manipulation library
    i    libmagickwand4 - image manipulation library
    i    libmarblewidget13 - Marble globe widget library
    idA  libmusicbrainz4-3 - Library to access the MusicBrainz.org data
    i    libnepomukdatamanagement4 - Basic Nepomuk data manipulation interface
    i    libnux-2.0-0 - Visual rendering toolkit for real-time app
    i    libnux-2.0-common - Visual rendering toolkit for real-time app
    i    libpoppler19 - PDF rendering library
    i    libqt3-mt - Qt GUI Library (Threaded runtime version),
    i    librhythmbox-core5 - support library for the rhythmbox music pl
    i    libusbmuxd1 - USB multiplexor daemon for iPhone and iPod
    i    libutouch-evemu1 - KernelInput Event Device Emulation Library
    i    libutouch-frame1 - Touch Frame Library
    i    libutouch-geis1 - Gesture engine interface support
    i    libutouch-grail1 - Gesture Recognition And Instantiation Libr
    idA  libx264-120 - x264 video coding library
    i    libyajl1 - Yet Another JSON Library
    i    linux-headers-3.2.0-29 - Header files related to Linux kernel versi
    i    linux-headers-3.2.0-29-generic - Linux kernel headers for version 3.2.0 on
    i    linux-image-3.2.0-29-generic - Linux kernel image for version 3.2.0 on 64
    i    mplayerthumbs - video thumbnail generator using mplayer
    i    myunity - Unity configurator
    i A  openoffice.org-calc - office productivity suite -- spreadsheet
    i A  openoffice.org-writer - office productivity suite -- word processo
    i    python-brlapi - Python bindings for BrlAPI
    i    python-louis - Python bindings for liblouis
    i    rts-bpp-dkms - rts-bpp driver in DKMS format.
    i    system76-driver - Universal driver for System76 computers.
    i    systemconfigurator - Unified Configuration API for Linux Instal
    i    systemimager-client - Utilities for creating an image and upgrad
    i    systemimager-common - Utilities and libraries common to both the
    i    systemimager-initrd-template-am - SystemImager initrd template for amd64 cli
    i    touchpad-indicator - An indicator for the touchpad
    i    ubuntu-tweak - Ubuntu Tweak
    i A  unity-lens-utilities - Unity Utilities lens
    i A  unity-scope-calculator - Calculator engine
    i    unity-scope-cities - Cities engine
    i    unity-scope-rottentomatoes - Unity Scope Rottentomatoes

    Read the article

  • October in Review

    - by Richard Bingham
    With OpenWorld over, October was time to get back to serious work for everyone, including the Fusion Applications Developer Relations team. Don't forget the OpenWorld content is still available, including presentation downloads, for a limited period of time, so be sure to grab anything you found useful or take another scan for anything you might have missed. Of all the announcements, the continued evolution of the Oracle Cloud services for extending and integrating with Fusion Applications is growing in popularity, and certainly the Cloud Marketplace is something we're becoming involved in. More details to follow.

    Fusion Concepts
    Last week Vik from our team started the new "Fusion Concepts" series of articles, providing those new to Fusion Applications an explanation of the architectural basics, with the aim of reducing the learning curve and laying the platform for more efficient and effective development. The series began with an insightful first post on the different schemas that exist in the Fusion Applications database. Look out for upcoming posts on multi-lingual entities, profile options, look-ups and more.

    New Learning Resources
    Our YouTube channel continued to expand with more 'how to' videos on using page composer, extending the Simplified UI (aka FUSE), and integrating BI reports and analytics. Also, the Oracle Learning Library is now well established as a central resource for knowledge, with thousands of tutorials, videos, and documents. Of particular note are the great new extensibility-related videos added by the CRM Product Management team, including more on the ever-expanding capabilities of Application Composer. To see some examples of these, search using the keyword 'customization' or the product 'Sales Cloud'. Finally on learning resources, as Oliver mentioned, the Oracle Press book on Fusion Application Customization and Extensibility is now available for pre-order on Amazon (due out 1st Jan).

    Out And About
    October also saw us attend the annual Apps Conference held by the UK Oracle User Group in London. Interestingly, there was an Applications Transformation stream of sessions and content that included Fusion Applications, with all the latest in the Oracle Applications evolution, as always focused around the three tenets of social, mobile, and cloud. Read more in Richard's post-event write-up. Other teams around Oracle have also been busy. Angelo from the Platform Technical Services group has done quite a bit of work using web services with Fusion SaaS and has published many interesting findings on his blog. It's definitely recommended reading if you are working on any related integration projects. The middleware-for-applications group has built a new tool called "AppAdvantage", offering an online assessment of your use of Fusion Middleware technologies with Oracle Applications. As the popularity of integrating cloud applications with on-premises systems continues to grow, leveraging existing middleware technologies (and licenses) to support the integration solution is likely to be of paramount importance. Similarly, the "Build Enterprise Application Extensions with Ease" section of the related webpage has AppsUX director Killan Evers speaking about customization using the composer tools. Both are useful resources for those just getting started with a move to Fusion Applications.
    The Oracle A-Team, specialists in middleware technical architecture, always publish superb content via their 'chronicles' site, now with a substantial amount specifically related to Fusion Applications. Click on the Fusion Applications menu on the top right of their homepage to see more. Of particular note last month was an article on customizing the timeout pop-up message shown to inactive users, providing design-time insight and easy-to-follow steps. Finally, if you're looking at using Oracle Middleware and Cloud to tailor and extend your applications, then you may also be interested in this new blog post on the roadmap for Oracle SOA and the latest on-demand Cloud Development webcast.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD web server on TCP port 80, or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. This technique takes advantage of the fact that, as a well-known service, port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting the connections and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects only to the gateway. The gateway takes care of those internal network details.

    The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails. Assume we have an application logic (a program) running on a computer, and a gadget connected to that computer via e.g. a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, up to a certain delay (depending on the execution cycle of the application), is tolerated. What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with and break the software's attempt to switch the LED color. Stripping this example of its physical outfit, I dare say that we have two communicating state machines, A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but not inconsistent with respect to the short time window when this information was generated on the side of G. What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle from A to G and back (IOW: A rapidly polling G) is out of focus because of technical restrictions (delay of serial com, A not always active, etc.).

    What I currently see as a general solution is:
    - the application logic A as one thread of execution
    - an adapter object (proxy) PG (representing G inside the computer), together with the serial driver, as another thread
    - a communication object between the two (A and PG) which is transactionally safe to exchange

    The two execution contexts (threads) on the computer may be multi-core, or just interrupt-driven, or tasks in an RTOS. The com object contains the following data:
    - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off ... etc.)
    - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
    - operation status (written by PG): operation_pending, success, wrong_state, link_broken
    - new state (written by PG): red, green, blue, off

    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command. (Example: suspected state = "red_or_green", command = "switch_to_blue".) Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object, try to send the command to G, receive its answer (or a timeout), and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome.
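    To illustrate, here is a minimal sketch of such a com object, written in C# for brevity (the same layout maps directly onto a C struct; all names are illustrative):

        using System;
        using System.Threading;

        [Flags]
        enum LedState   // the power set of G's states expressed as flag combinations
        {
            Off = 1, Red = 2, Green = 4, Blue = 8,
            RedOrGreen = Red | Green, RedOrBlue = Red | Blue, RedOrOff = Red | Off  // ...etc.
        }

        enum LedCommand { TestIfOff, SwitchToRed, SwitchToGreen, SwitchToBlue }

        enum OpStatus { OperationPending, Success, WrongState, LinkBroken }

        sealed class ComObject
        {
            public LedState SuspectedState;   // written by A
            public LedCommand Command;        // written by A
            public OpStatus Status;           // written by PG
            public LedState NewState;         // written by PG
        }

        static class ComExchange
        {
            static ComObject slot;            // single exchange cell between A and PG

            // A posts a request; PG later takes it, talks to G, fills in Status and
            // NewState, and puts it back. The atomic swap gives the transactionally
            // safe hand-over: a com object is owned by exactly one side at a time.
            public static void Post(ComObject c) { Interlocked.Exchange(ref slot, c); }
            public static ComObject Take()      { return Interlocked.Exchange(ref slot, null); }
        }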
The com object could be separated of course (into two objects, one for each direction) but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out or hear an entirely different view on such a situation.

    Read the article

  • How would I handle input with a Game Component?

    - by Aufziehvogel
    I am currently having problems finding my way into the component-oriented XNA design. I read an overview of the general design pattern and googled a lot of XNA examples. However, they seem to be right on the opposite side. In the general design pattern, an object (my current player) is passed to InputComponent::update(Player). This means the class will know what to do and how this will affect the game (e.g. move person vs. scroll text in a menu). Yet in XNA, GameComponent.Update(GameTime) is called automatically without a reference to the current player. The only XNA examples I found built some sort of higher-level keyboard engine into the game component, like this:

        class InputComponent : GameComponent
        {
            public void keyReleased(Keys key);
            public void keyPressed(Keys key);
            public bool keyDown(Keys key);

            public override void Update(GameTime gameTime)
            {
                // compare previous state with current state and
                // determine if released, pressed, down or nothing
            }
        }

    Some others went a bit further, making it possible to use a Service Locator with a design like this:

        interface IInputComponent
        {
            void downwardsMovement(Keys key);
            void upwardsMovement(Keys key);
            bool pausedGame(Keys key);

            // determine which keys are pressed and what that means;
            // can be done for different inputs in different implementations
            void Update(GameTime gameTime);
        }

    Yet then I am wondering if it is possible to design an input class to resolve all possible situations. Like in a menu a mouse click can mean "click that button", but in game play it can mean "shoot that weapon". So if I am using such a modular design with game components for input, how much logic is to be put into the InputComponent / KeyboardComponent / GamepadComponent, and where is the rest handled? What I had in mind when I heard about Game Components and the Service Locator in XNA was something like this:
    - use Game Components to run the InputHandler automatically in the loop
    - use the Service Locator to be able to switch input at runtime (i.e. let the player choose if he wants to use a gamepad or a keyboard, or which shall be player 1 and which player 2)

    However, now I cannot see how this can be done. The first code example does not seem flexible enough, as on a gamepad you could require some combination of buttons for something that is possible on the keyboard with only one button or with the mouse. The second code example seems really hard to implement, because the InputComponent has to know in which context we currently are. Moreover, you could imagine your application to be multi-layered, and let the key-stroke go through all layers to the bottom layer, which requires a different behaviour than the InputComponent would have guessed from the top layer. The general design pattern of passing the Player to update() does not have a representation in XNA, and I also cannot see how and where to decide which class should be passed to update(). Most of the time it would of course be the player, but sometimes there could be menu items you have to or can click. I see that the question in general is already dealt with here, but probably from a more elaborate point of view. At least, I am not smart enough in game development to understand it. I am searching for a rather code-based example directly for XNA. And the answer there leaves (a noob like) me still alone in how the object that should receive the detected event is chosen. Like if I have a key-up event, should it go to the text box or to the player?
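    For reference, here is a minimal sketch of the previous-vs-current state comparison that the first example leaves as a comment (the naming is my own):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        public class KeyboardComponent : GameComponent
        {
            private KeyboardState previous;
            private KeyboardState current;

            public KeyboardComponent(Game game) : base(game) { }

            public override void Update(GameTime gameTime)
            {
                previous = current;
                current = Keyboard.GetState();
                base.Update(gameTime);
            }

            // key went down this frame
            public bool KeyPressed(Keys key)
            {
                return previous.IsKeyUp(key) && current.IsKeyDown(key);
            }

            // key came up this frame
            public bool KeyReleased(Keys key)
            {
                return previous.IsKeyDown(key) && current.IsKeyUp(key);
            }

            // key is currently held
            public bool KeyDown(Keys key)
            {
                return current.IsKeyDown(key);
            }
        }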

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time. As we have little time, we need to be selective about what we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with SQL features and transactions. As a part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now with their software release on Oct 31, everyone can use it. It is now available as forever-free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted. ClustrixDB allows users to:
    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible: use MySQL drivers, tools and replication

    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them.
    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category; uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest growing e-commerce company in the US; used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the talk is about NewSQL, I know what I am talking about.
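    As a quick illustration of the MySQL compatibility claim, a standard .NET MySQL connector should be able to talk to a ClustrixDB cluster unchanged; something like this sketch (the host, schema and credentials are made up):

        using System;
        using MySql.Data.MySqlClient;   // the standard .NET MySQL connector

        class ClustrixSmokeTest
        {
            static void Main()
            {
                // Hypothetical connection details: because ClustrixDB speaks the
                // MySQL wire protocol, the ordinary MySQL connection string applies.
                var connStr = "Server=clustrix.example.com;Port=3306;" +
                              "Database=shop;Uid=appuser;Pwd=secret;";

                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();
                    using (var cmd = new MySqlCommand("SELECT COUNT(*) FROM orders", conn))
                    {
                        Console.WriteLine("orders: {0}", cmd.ExecuteScalar());
                    }
                }
            }
        }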
    Read more from Clustrix on why ClustrixDB might be the right database for you. Download ClustrixDB today and install it on your machine, so that when we discuss its technical aspects in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • TechEd North America 2012 – Day 1 #msTechEd

    - by Marco Russo (SQLBI)
    Yesterday Alberto and I delivered the PreCon day about BISM Tabular in Analysis Services 2012. We received very good feedback and now I am looking forward to meeting people who read our blogs and our books! Ping me on Twitter at @marcorus if you want to contact me during the conference. This is my schedule for the next few days:

    Monday, June 11, 2012
    - 10:30am-12:30pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
    - I will try to watch some sessions in the afternoon
    - 6:30pm-7:00pm: I will be at the O'Reilly booth meeting book readers and doing some book signing

    Tuesday, June 12, 2012
    - 12:30pm-3:30pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
    - 5:00pm-6:15pm: I will attend Alberto's session DBI413 Many-to-Many Relationships in BISM Tabular (room S330E)
    - 6:15pm-9:00pm: Community Night & Ask the Experts, where we'll discuss Analysis Services, Tabular and Multidimensional!

    Wednesday, June 13, 2012
    - 11:15am-11:30am: Don't miss this special demo session at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Alberto and I will present Querying multi-billion rows with many-to-many relationships in SSAS Tabular (xVelocity), and you're invited to guess the response time of DAX queries on a 4-billion-row table with many-to-many relationships before we run them! We'll give away some 8GB USB keys if you guess the right answer!
    - 12:30pm-1:00pm: Alberto and I will have a book signing session at the TechEd Bookstore
    - 3:00pm-5:00pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)

    Thursday, June 14, 2012
    - 2:45pm-4:00pm: I will deliver my DBI319 BISM: Multidimensional vs. Tabular breakthrough session in room S320A. I expect many questions here!

    And if you want to learn more about Analysis Services Tabular, we announced two more online sessions of our SSAS Tabular Workshop:
    - July 2-3, 2012 - SSAS Workshop Online - America's time zone
    - September 3-4, 2012 - SSAS Workshop Online - America's time zone

    Register now if you are interested; the early bird for the July session expires on June 19, 2012! I will also deliver an SSAS Workshop in Oslo (Norway) on August 27-28, 2012.

    Read the article

  • Is there a theory for "transactional" sequences of failing and no-fail actions?

    - by Ross Bencina
    My question is about writing transaction-like functions that execute sequences of actions, some of which may fail. It is related to the general C++ principle "destructors can't throw," the no-fail property, and maybe also to multi-phase transactions or exception safety. However, I'm thinking about it in language-neutral terms. My concern is with correctly designing error handling in C++ functions that must be reliable. I would like to know what the concepts below are called so that I can learn more about them. I'm sorry that I can't ask the question more directly. Since I don't know this area, I have provided an example to explain my question. The question is at the end. Here goes:

    Consider a sequence of steps or actions executed sequentially, where actions belong to one of two classes: those that always succeed, and those that may fail. In the examples below:
    - S stands for an action that always succeeds (called "no-fail" in some settings).
    - F stands for an action that may fail (for example, it might fail to allocate memory or do I/O that could fail).

    Consider a sequence of actions (executed sequentially from left to right):

    S->S->S->S

    Since each action in the sequence above succeeds, the whole sequence succeeds. On the other hand, the following sequence may fail because the last action may fail:

    S->S->S->F

    So, claim: a sequence has the no-fail (S) property if and only if all of its actions are no-fail.

    Now, I'm interested in action sequences that form "atomic transactions", with "failure atomicity", i.e. where either the whole sequence completes successfully, or there is no effect. I.e. if some action fails, the earlier ones must be rolled back. This requires that any successfully executed actions prior to a failing action must always be able to be rolled back. Consider the sequence:

    S->S->S->F
    S<-S<-S

    In the example above, the first row is the forward path of the transaction, and the second row contains inverse actions (executed from right to left) that can be used to roll back if the final action in the top row fails. It seems to me that for a transaction to support failure atomicity, the following invariant must hold:

    Claim: To support failure atomicity (either completion or complete roll-back on failure), all actions preceding the latest failable (F) action on the forward path (marked * in the example below) must have no-fail (S) inverses. The following is an example of a sequence that supports failure atomicity:

             *
    S->F->F->F
    S<-S<-S

    Further, if we want the transaction to be able to attempt cancellation mid-way through, but still guarantee either full completion or full rollback, then we need the following property:

    Claim: To support failure atomicity and cancellation mid-way through execution, in the face of errors in the inverse (cancellation) path, all actions following the earliest failable (F) inverse on the reverse path (marked *) must be no-fail (S).

    F->F->F->S->S
    S<-S<-F<-F
          *

    I believe that these two conditions guarantee that an abortable/cancelable transaction will never get "stuck". My questions are: What is the study and theory of these properties called? Are my claims correct? And what else is there to know?

    UPDATE 1: Updated terminology: what I previously called "robustness" is called atomicity in the database literature.
    UPDATE 2: Added an explicit reference to failure atomicity, which seems to be a thing.
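    UPDATE 3: For concreteness, here is a small sketch (in C#, with hypothetical types) of the rollback discipline the claims describe: forward actions may fail, while the inverses of all completed steps are assumed no-fail.

        using System;
        using System.Collections.Generic;

        // One transaction step: a forward action that may fail (F),
        // paired with an inverse that must be no-fail (S).
        sealed class Step
        {
            public Func<bool> Forward;   // returns false on failure
            public Action Inverse;       // assumed never to fail
        }

        static class Transaction
        {
            // Execute steps left to right. On the first failure, run the
            // inverses of the completed steps right to left, so that either
            // the whole sequence completes or there is no net effect.
            public static bool Run(IEnumerable<Step> steps)
            {
                var completed = new Stack<Step>();
                foreach (var step in steps)
                {
                    if (!step.Forward())
                    {
                        while (completed.Count > 0)
                            completed.Pop().Inverse();   // no-fail by the invariant
                        return false;                    // rolled back: no effect
                    }
                    completed.Push(step);
                }
                return true;                             // completed successfully
            }
        }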

    Read the article

  • New Sample Demonstrating the Traversing of Tree Bindings

    - by Duncan Mills
    A technique that I seem to use a fair amount, particularly in the construction of dynamic UIs, is the use of an ADF Tree Binding to encode a multi-level master-detail relationship, which is then expressed in the UI in some kind of looping form – usually a series of nested af:iterators, rather than the conventional tree or treetable. This technique exploits two features of the tree binding. First, the fact that a tree binding can return both a collectionModel and a treeModel; this collectionModel can be used directly by an iterator. Secondly, the "rows" returned by the collectionModel themselves contain an attribute called .children. This attribute in turn gives access to a collection of all the children of that node, which can also be iterated over. Putting this together, you can represent the data encoded into a tree binding in all sorts of ways. As an example, I've put together a very simple sample based on the HR schema and uploaded it to the ADF Sample project. It produces this UI: The important code is shown here for a Region -> Country -> Location hierarchy:

        <af:iterator id="i1" value="#{bindings.AllRegions.collectionModel}" var="rgn">
          <af:showDetailHeader text="#{rgn.RegionName}" disclosed="true" id="sdh1">
            <af:iterator id="i2" value="#{rgn.children}" var="cnty">
              <af:showDetailHeader text="#{cnty.CountryName}" disclosed="true" id="sdh2">
                <af:iterator id="i3" value="#{cnty.children}" var="loc">
                  <af:panelList id="pl1">
                    <af:outputText value="#{loc.City}" id="ot3"/>
                  </af:panelList>
                </af:iterator>
              </af:showDetailHeader>
            </af:iterator>
          </af:showDetailHeader>
        </af:iterator>

    You can download the entire sample from here:

    Read the article

  • Hyper-V for Developers Part 1 Internal Networks

    Over the last year, we've been working with Microsoft to build training and demo content for the next version of Office Communications Server, code-named Microsoft Communications Server 14. This involved building multi-server demo environments in Hyper-V, getting them running on demo servers which we took to TechEd, PDC, and other training events, and sometimes connecting the demo servers to the show networks at those events. ITPro stuff that should scare the hell out of a developer! It can get ugly when I occasionally have to venture into ITPro land. Let's leave it at that. Having gone through this process about 10 to 15 times in the last year, I finally have it down. This blog series is my attempt to put all that knowledge in one place, if anything so I can find it somewhere when I need it again. I'll start with the most simple scenario and then build on top of it in future blog posts. If you're an ITPro, please resist the urge to laugh at how trivial this is.

    Internal Hyper-V Networks

    Let's start simple. An internal network is one that is intended only for the virtual machines that are going to be on that network; it enables them to communicate with each other.

    Create an Internal Network

    On your host machine, fire up the Hyper-V Manager and click the Virtual Network Manager in the Actions panel. Select Internal and leave all the other default values. Give the virtual network a name, and leave all the other default values. After the virtual network is created, open the Network and Sharing Center and click Change Adapter Settings to see the list of network connections. The only thing I recommend that you do is give this connection a friendly label, e.g. "Hyper-V Internal". When you have multiple networks and virtual networks on the host machine, this helps group the networks so you can easily differentiate them from each other. Otherwise, don't touch it; only bad things can happen.

    Connect the Virtual Machines to the Internal Network

    I'm assuming that you have more than one virtual machine already configured in Hyper-V, for example a Domain Controller, an Exchange Server, and a SharePoint Server. What you need to do is basically plug the network into each virtual machine. In order to do this, the machine needs to have a virtual network adapter. If the VM doesn't have a network adapter, open the VM's Settings and click Add Hardware in the left pane, then choose the virtual network to which to bind the adapter. If you already have a virtual network adapter on the VM, simply connect it to the virtual network.

    Assign IP Addresses to the Virtual Machines on the Internal Network

    Open the Network and Sharing Center on your VM; there should only be one network at this time. Open the Properties of the connection, select Internet Protocol Version 4 (TCP/IPv4) and hit Properties. In this environment, I'm assigning IP addresses as 192.168.0.xxx. This particular VM has an IP address of 192.168.0.40 with a subnet mask of 255.255.255.0, and a DNS server of 192.168.0.18. DNS is running on the Domain Controller VM, which has an IP address of 192.168.0.18. Repeat this process on every VM in your environment, obviously assigning a unique IP address to each. In an environment with a domain controller, you should now be able to ping the machines from each other.

    What Next?

    After completing this process, here's what you still cannot do:
    - Access the internet from any of the VMs
    - Remote desktop to a VM from the host
    - Remote desktop to a VM over the network

    In the next post, we'll take a look at configuring an External network adapter on the virtual machines. We'll then build on top of that so that you can RDP into the VMs from the host machine and over the network.

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question; feel free to skip down to the specific question.

    Hello, I've been very interested in the idea of abstract programming for the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year, and while I got close, I never released anything beyond my compiler. This is something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes of course you noob! You can't account for everything!" To which I have to reply, "Why not?"

    To give you some background into what I'm talking about, this all started with writing maybe a shade-of-gray-hat SEO software. I found myself constantly having to create similar, but slightly different, sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut, mixed with the delete button. It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code; as time went on I could version things out with subversion and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this list of requirements:
    - Should be able to use networking
    - A "Macro" or "Expression" system which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google))
    - Multithreaded
    - Able to add UI elements through some type of XML -- people can make their own addons, etc.
    - Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc.

    This would allow the software to essentially be used for everything: really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult.

    So my question: With so many attempts at this, I'm out of ideas how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and draw everything out to completion? These are things I struggle with. P.S., I'm using C# as my main language.
    I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case or if I'm just a bad programmer. Thanks for your time.
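    Edit: to make the plugin idea a bit more concrete, here's roughly the shape I have in mind: a plugin interface plus a function registry acting as a service locator (all names are just illustrative).

        using System;
        using System.Collections.Generic;

        // A plugin contributes named functions that the expression engine can call,
        // so that =GetUrl("...") resolves to whatever some plugin registered as "GetUrl".
        public interface IPlugin
        {
            string Name { get; }
            void Register(IDictionary<string, Func<object[], object>> functions);
        }

        // A tiny service locator: plugins register functions, and the host looks
        // them up by name while evaluating an expression tree.
        public class FunctionRegistry
        {
            private readonly Dictionary<string, Func<object[], object>> functions =
                new Dictionary<string, Func<object[], object>>(StringComparer.OrdinalIgnoreCase);

            public void Load(IPlugin plugin)
            {
                plugin.Register(functions);
            }

            public object Invoke(string name, params object[] args)
            {
                Func<object[], object> fn;
                if (!functions.TryGetValue(name, out fn))
                    throw new InvalidOperationException("Unknown function: " + name);
                return fn(args);
            }
        }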

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle OpenWorld last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle's commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business. As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on-premise and the various on demand options as business needs arise.

    In today's business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high-priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best-laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes.

    The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility – such as future upgrades and product updates – to Oracle technology experts, at an affordable and expected price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer's overall cost of ownership with its size and anticipated growth and business needs. In addition to providing PeopleSoft HCM applications' world-class business functionality and Oracle On Demand's embassy-grade security, Oracle's hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers' business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and the average time to production under a month. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and a secured URL, all within a private cloud.
    Customers will not share databases, environments, platforms, or access portals with other customers, because we value how mission-critical your data are to your business. Oracle On Demand also provides a full breadth of disaster recovery services to give customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in the case of an emergency. Currently, we have over 50 PeopleSoft customers who have delegated the management of their applications to us through Oracle On Demand. If you are a customer interested in learning more about PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize your variable IT costs and free up your resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well.

    Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4].

    And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement.

    Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic.

    So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services.

    [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware.
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
[3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
[4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
[5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
[6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0 (a slightly fuller sketch follows below)
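For readers who want to try this themselves, here is a minimal command-line sketch of Crossbow-style network virtualization. It assumes an Oracle Solaris 11 system; the etherstub and vnic names are purely illustrative:

    # Create an internal virtual switch (etherstub) that connects VEs at bus speed
    dladm create-etherstub stub0

    # Attach two virtual NICs to it; cap the second at 100 Mbps
    dladm create-vnic -l stub0 vnic1
    dladm create-vnic -l stub0 -p maxbw=100M vnic2

    # Review the virtual links and their bandwidth properties
    dladm show-vnic
    dladm show-linkprop -p maxbw vnic2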

    Read the article

  • Why I switch from Asana.com

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/24/why-i-switch-from-asana.com.aspx

    I used Asana.com for one to two years. The experience was decent overall, but it was never easy: when I started using it, it caused a lot of confusion, and now I have switched away from it.

    When I first saw it, I really did not understand how to make a list private. There is an icon at the top; click on it to make the list private. Even after doing that, I was still not sure it had worked. It created a lot of confusion at the time, and I had to ask around far too much to figure out small things. The UI is interesting, but hard to understand. What I am looking for is just a list that I can keep private, and share only when I choose to, by adding the email address of the person who should see the same list.

    A few days ago I noticed that my Windows 8 phone has an app called Microsoft OneNote. The good thing about this Microsoft app is that I can record my voice in it. If someone wants to capture a list for later, he can just speak and have it recorded. This is great when the mobile keypad is simply not as fast as a regular keyboard.

    Google Docs is another good option: just make a document and use it, then share it with a friend using its many sharing options. The best thing is that this app has a much simpler UI than the other apps.

    One more alternative is https://trello.com, which you may have heard about from Joel on his blog: http://www.joelonsoftware.com/items/2011/09/13.html

    There are many HTML5, browser-based, and mobile apps, and many of them are multi-platform, which means you can use them everywhere from your PC to your pocket. One thing we all want is offline support: if you are not online, changes are saved locally and pushed back to the server when you come back online.

    The biggest problem with some apps is that they are attractive but hard to learn. It is not clearly defined what each feature does, which frustrates and confuses users; when an app is not simple to use, people stop trying to learn it. That is the whole problem I had with asana.com.

    If you do not want to try anything new, what about Sticky Notes, which is part of Windows 7? That app is still usable, since you can store text on it.

    If you know a good app for making a task list that is accessible from a tablet or mobile, please leave a comment here. In the whole world of apps there are many that do this same thing differently; I have mentioned a few of them here. I hope this was useful.

    Thanks for reading my post.

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to throw a little 'quiz' at the applicants before moving forward with interviews, so as to weed the group down a little and find people who can demonstrate some skill. I put together the following quiz to send to applicants. It focuses only on PHP, because that is what about 95% of the work will be done in. I'm hoping to get some feedback on A. whether it's a good idea to send this to applicants, and B. whether it can be improved upon.

    # 1. FizzBuzz
    # Write a small application that does the following:
    # Counts from 1 to 100
    # For multiples of 3 output "Fizz"
    # For multiples of 5 output "Buzz"
    # For multiples of 3 and 5 output "FizzBuzz"
    # For numbers that are not multiples of 3 nor 5 output the number.
    <?php ?>

    # 2. Arrays
    # Create a multi-dimensional array that contains
    # keys for 'id', 'lot', 'car_model', 'color', 'price'.
    # Insert three sets of data into the array.
    <?php ?>

    # 3. Comparisons
    # Without executing the code, tell if the expressions
    # below will return true or false.
    <?php
    if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False?
    if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False?
    if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False?
    ?>

    # 4. Bug Checking
    # The code below is flawed. Fix it so that the code
    # runs properly without producing any Errors, Warnings
    # or Notices, and returns the proper value.
    <?php
    //Determine how many parts are needed to create a 3D pyramid.
    function find_3d_pyramid($rows)
    {
        //Loop through each row.
        for ($i = 0; $i < $rows; $i++)
        {
            $lastRow++;
            //Append the latest row to the running total.
            $total = $total + (pow($lastRow,3));
        }
        //Return the total.
        return $total;
    }
    $i = 3;
    echo "A pyramid consisting of $i rows will have a total of ".find_3d_pyramid($i)." pieces.";
    ?>

    # 5. Quick Examples
    # Create a small example to complete the task
    # for each of the following problems.
    # Create an md5 hash of "Hello World";
    # Replace all occurrences of "_" with "-" in the string "Welcome_to_the_universe."
    # Get the current date and time, in the following format, YYYY/MM/DD HH:MM:SS AM/PM
    # Find the sum, average, and median of the following set of numbers: 1, 3, 5, 6, 7, 9, 10.
    # Randomly roll a six-sided die 5 times. Store the 5 rolls into an array.
    <?php ?>
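    For calibration while grading, one possible reference answer to question 1 might look like the following (just a sketch; the handout itself should stay blank, and applicants' answers will vary):

    <?php
    // One possible FizzBuzz answer (reference only, not part of the handout)
    for ($i = 1; $i <= 100; $i++) {
        if ($i % 15 == 0) {
            echo "FizzBuzz\n";   // multiple of both 3 and 5
        } elseif ($i % 3 == 0) {
            echo "Fizz\n";
        } elseif ($i % 5 == 0) {
            echo "Buzz\n";
        } else {
            echo $i . "\n";
        }
    }
    ?>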

    Read the article

  • django/python: is one view that handles two sibling models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views, video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on the input parameter model) for a certain object, which can be of type Event, Member, or Crag. The view alters its behaviour based on the input parameter action (a better name would be mode), which can have the value "edit" or "view". The problem is that I need to ask whether the input parameter model contains Video or Image in media_list so that I can do the right thing. A similar condition also appears in the helper method media_edit_list, which is called from the view. I don't particularly like it, but the only alternative I can think of is to have separate (but almost identical) logic for video_list and image_list, and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less, because the video functions would be very similar to the respective image functions. What do you recommend? Here is an extract of the relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here:

    #urls
    url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<mode>(view|edit))/$',
        views.image_list, name='image-list')
    url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<mode>(view|edit))/$',
        views.video_list, name='video-list')

    #views
    def image_list(request, rel_model_tag, rel_object_id, mode):
        return media_list(request, Image, rel_model_tag, rel_object_id, mode)

    def video_list(request, rel_model_tag, rel_object_id, mode):
        return media_list(request, Video, rel_model_tag, rel_object_id, mode)

    def media_list(request, model, rel_model_tag, rel_object_id, mode):
        rel_model = tag_to_model(rel_model_tag)
        rel_object = get_object_or_404(rel_model, pk=rel_object_id)
        if model == Image:
            star_media = rel_object.star_image
        else:
            star_media = rel_object.star_video
        filter_params = {}
        if rel_model == Event:
            filter_params['event'] = rel_object_id
        elif rel_model == Member:
            filter_params['members'] = rel_object_id
        elif rel_model == Crag:
            filter_params['crag'] = rel_object_id
        media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('date_added').all()
        context = {
            'media_list': media_list,
            'star_media': star_media,
        }
        if mode == 'edit':
            return media_edit_list(request, model, rel_model_tag, rel_object_id, context)
        return media_view_list(request, model, rel_model_tag, rel_object_id, context)

    def media_view_list(request, model, rel_model_tag, rel_object_id, context):
        if request.is_ajax():
            context['base_template'] = 'boxes/base-lite.html'
        return render(request, 'media/list-items.html', context)

    def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
        if model == Image:
            get_media_edit_record = get_image_edit_record
        else:
            get_media_edit_record = get_video_edit_record
        media_list = [get_media_edit_record(media, rel_model_tag, rel_object_id)
                      for media in context['media_list']]
        if context['star_media']:
            star_media = get_media_edit_record(context['star_media'], rel_model_tag, rel_object_id)
        else:
            star_media = None
        json = simplejson.dumps({
            'star_media': star_media,
            'media_list': media_list,
        })
        return HttpResponse(json, content_type=json_response_mimetype(request))

    def get_image_edit_record(image, rel_model_tag, rel_object_id):
        record = {
            'url': image.image.url,
            'name': image.title or image.filename,
            'type': mimetypes.guess_type(image.image.path)[0] or 'image/png',
            'thumbnailUrl': image.thumbnail_2.url,
            'size': image.image.size,
            'id': image.id,
            'media_id': image.media_ptr.id,
            'starUrl': reverse('image-star', kwargs={'image_id': image.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}),
        }
        return record

    def get_video_edit_record(video, rel_model_tag, rel_object_id):
        record = {
            'url': video.embed_url,
            'name': video.title or video.url,
            'type': None,
            'thumbnailUrl': video.thumbnail_2.url,
            'size': None,
            'id': video.id,
            'media_id': video.media_ptr.id,
            'starUrl': reverse('video-star', kwargs={'video_id': video.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}),
        }
        return record

    # models
    class Media(models.Model, WebModel):
        title = models.CharField('title', max_length=128, default='', db_index=True, blank=True)
        event = models.ForeignKey(Event, null=True, default=None, blank=True)
        crag = models.ForeignKey(Crag, null=True, default=None, blank=True)
        members = models.ManyToManyField(Member, blank=True)
        added_by = models.ForeignKey(Member, related_name='added_images')
        date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False)

    class Image(Media):
        image = ProcessedImageField(upload_to='uploads',
                                    processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                                    format='JPEG', options={'quality': 75})
        thumbnail_1 = ImageSpecField(source='image',
                                     processors=[SmartResize(width=178, height=134)],
                                     format='JPEG', options={'quality': 75})
        thumbnail_2 = ImageSpecField(source='image',
                                     #processors=[SmartResize(width=256, height=192)],
                                     processors=[ResizeToFit(height=164)],
                                     format='JPEG', options={'quality': 75})

    class Video(Media):
        url = models.URLField('url', max_length=256, default='')
        embed_url = models.URLField('embed url', max_length=256, default='', blank=True)
        author = models.CharField('author', max_length=64, default='', blank=True)
        thumbnail = ProcessedImageField(upload_to='uploads',
                                        processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                                        format='JPEG', options={'quality': 75},
                                        null=True, default=None, blank=True)
        thumbnail_1 = ImageSpecField(source='thumbnail',
                                     processors=[SmartResize(width=178, height=134)],
                                     format='JPEG', options={'quality': 75})
        thumbnail_2 = ImageSpecField(source='thumbnail',
                                     #processors=[SmartResize(width=256, height=192)],
                                     processors=[ResizeToFit(height=164)],
                                     format='JPEG', options={'quality': 75})

    class Crag(models.Model, WebModel):
        name = models.CharField('name', max_length=64, default='', db_index=True)
        normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False)
        type = models.IntegerField('crag type', null=True, default=None, choices=crag_types)
        description = models.TextField('description', default='', blank=True)
        country = models.ForeignKey('country', null=True, default=None)  #TODO: make this not null when db enables it
        latitude = models.FloatField('latitude', null=True, default=None)
        longitude = models.FloatField('longitude', null=True, default=None)
        location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True)  # handled by db, used for marker clustering
        added_by = models.ForeignKey('member', null=True, default=None)
        #route_count = models.IntegerField('route count', null=True, default=None, editable=False)
        date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False)
        last_modified = models.DateTimeField('last modified', auto_now=True, null=True, default=None, editable=False)
        star_image = models.ForeignKey('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
        star_video = models.ForeignKey('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
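    One direction worth considering before splitting into four functions: push the per-model differences into a small dispatch table, so media_list itself never tests the model class. This is a minimal sketch reusing the names above, under the assumption that the differences stay limited to the star attribute and the record builder; it is not a tested drop-in.

    from django.shortcuts import get_object_or_404

    # Hypothetical dispatch table: everything model-specific lives here.
    MEDIA_CONFIG = {
        Image: {'star_attr': 'star_image', 'edit_record': get_image_edit_record},
        Video: {'star_attr': 'star_video', 'edit_record': get_video_edit_record},
    }

    def media_list(request, model, rel_model_tag, rel_object_id, mode):
        cfg = MEDIA_CONFIG[model]
        rel_object = get_object_or_404(tag_to_model(rel_model_tag), pk=rel_object_id)
        star_media = getattr(rel_object, cfg['star_attr'])  # star_image or star_video
        # ... filtering and context building exactly as before ...
        # media_edit_list would then call cfg['edit_record'] instead of branching.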

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and a memory profiler, costs 614€, and is currently on sale at a special offer price of 306€. It includes one year of free upgrades. The uneven € figures are derived from the 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported.

    It looks like JustTrace is, like Ants, a "Just My Code" profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get in the All Methods grid a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code.

    But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I really miss YourKit's thread stack window. How am I supposed to get a clue about where to look when much memory is allocated and CPU consumption is high? The Snapshot summary gives a rough overview, which is ok for a first impression.

    Next is the Assemblies view. This gives you a list of all loaded assemblies. Not terribly useful.

    The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft; by default, mscorlib and System are not checked. That is the reason why my By Type window looked nearly empty the first time. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it in my opinion.

    Now you can examine all string instances and look at who in the object graph keeps a reference to them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing one instance.

    That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more will not help them, because the sheer amount of information will overwhelm them, and you need to have a pretty good understanding of how the GC and the CLR work. When you have a performance issue at a customer's machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool.

    Next: SpeedTrace

    Read the article

  • django/python: is one view that handles two separate models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views, video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on the input parameter model) for a certain object, which can be of type Event, Member, or Crag. It alters its behaviour based on the input parameter action, which can be either "edit" or "view". The problem is that I need to ask whether the input parameter model contains Video or Image in media_list so that I can do the right thing. A similar condition also appears in the helper method media_edit_list, which is called from the view. I don't particularly like it, but the only alternative I can think of is to have separate logic for video_list and image_list, and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less, because the video functions would be very similar to the respective image functions. What do you recommend? Here is an extract of the relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here:

    #urls
    url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$',
        views.image_list, name='image-list')
    url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$',
        views.video_list, name='video-list')

    #views
    def image_list(request, rel_model_tag, rel_object_id, action):
        return media_list(request, Image, rel_model_tag, rel_object_id, action)

    def video_list(request, rel_model_tag, rel_object_id, action):
        return media_list(request, Video, rel_model_tag, rel_object_id, action)

    def media_list(request, model, rel_model_tag, rel_object_id, action):
        rel_model = tag_to_model(rel_model_tag)
        rel_object = get_object_or_404(rel_model, pk=rel_object_id)
        if model == Image:
            star_media = rel_object.star_image
        else:
            star_media = rel_object.star_video
        filter_params = {}
        if rel_model == Event:
            filter_params['media__event'] = rel_object_id
        elif rel_model == Member:
            filter_params['media__members'] = rel_object_id
        elif rel_model == Crag:
            filter_params['media__crag'] = rel_object_id
        media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('media__date_added').all()
        context = {
            'media_list': media_list,
            'star_media': star_media,
        }
        if action == 'edit':
            return media_edit_list(request, model, rel_model_tag, rel_object_id, context)
        return media_view_list(request, model, rel_model_tag, rel_object_id, context)

    def media_view_list(request, model, rel_model_tag, rel_object_id, context):
        if request.is_ajax():
            context['base_template'] = 'boxes/base-lite.html'
        return render(request, 'media/list-items.html', context)

    def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
        if model == Image:
            get_media_record = get_image_record
        else:
            get_media_record = get_video_record
        media_list = [get_media_record(media, rel_model_tag, rel_object_id)
                      for media in context['media_list']]
        if context['star_media']:
            star_media = get_media_record(context['star_media'], rel_model_tag, rel_object_id)
            star_media['starred'] = True
        else:
            star_media = None
        json = simplejson.dumps({
            'star_media': star_media,
            'media_list': media_list,
        })
        return HttpResponse(json, content_type=json_response_mimetype(request))

    # models
    class Media(models.Model, WebModel):
        title = models.CharField('title', max_length=128, default='', db_index=True, blank=True)
        event = models.ForeignKey(Event, null=True, default=None, blank=True)
        crag = models.ForeignKey(Crag, null=True, default=None, blank=True)
        members = models.ManyToManyField(Member, blank=True)
        added_by = models.ForeignKey(Member, related_name='added_images')
        date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False)

        def __unicode__(self):
            return self.title

        def get_absolute_url(self):
            return self.image.url if self.image else self.video.embed_url

    class Image(Media):
        image = ProcessedImageField(upload_to='uploads',
                                    processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                                    format='JPEG', options={'quality': 75})
        thumbnail_1 = ImageSpecField(source='image',
                                     processors=[SmartResize(width=178, height=134)],
                                     format='JPEG', options={'quality': 75})
        thumbnail_2 = ImageSpecField(source='image',
                                     #processors=[SmartResize(width=256, height=192)],
                                     processors=[ResizeToFit(height=164)],
                                     format='JPEG', options={'quality': 75})

    class Video(Media):
        url = models.URLField('url', max_length=256, default='')
        embed_url = models.URLField('embed url', max_length=256, default='', blank=True)
        author = models.CharField('author', max_length=64, default='', blank=True)
        thumbnail = ProcessedImageField(upload_to='uploads',
                                        processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                                        format='JPEG', options={'quality': 75},
                                        null=True, default=None, blank=True)
        thumbnail_1 = ImageSpecField(source='thumbnail',
                                     processors=[SmartResize(width=178, height=134)],
                                     format='JPEG', options={'quality': 75})
        thumbnail_2 = ImageSpecField(source='thumbnail',
                                     #processors=[SmartResize(width=256, height=192)],
                                     processors=[ResizeToFit(height=164)],
                                     format='JPEG', options={'quality': 75})

    class Crag(models.Model, WebModel):
        name = models.CharField('name', max_length=64, default='', db_index=True)
        normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False)
        type = models.IntegerField('crag type', null=True, default=None, choices=crag_types)
        description = models.TextField('description', default='', blank=True)
        country = models.ForeignKey('country', null=True, default=None)  #TODO: make this not null when db enables it
        latitude = models.FloatField('latitude', null=True, default=None)
        longitude = models.FloatField('longitude', null=True, default=None)
        location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True)  # handled by db, used for marker clustering
        added_by = models.ForeignKey('member', null=True, default=None)
        #route_count = models.IntegerField('route count', null=True, default=None, editable=False)
        date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False)
        last_modified = models.DateTimeField('last modified', auto_now=True, null=True, default=None, editable=False)
        star_image = models.OneToOneField('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
        star_video = models.OneToOneField('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)

    Read the article

  • Building Enterprise Smartphone App – Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging and can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also have a need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs, because the same device is used for communicating with the driver and for other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings, and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to need smartphone applications. In the next installment I will discuss platforms and features.

    del.icio.us Tags: Smartphones,Enterprise Smartphone Apps,Architecture

    Read the article

  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with the ".properties" extension, containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, in source code you must initialize a specific class for reading settings: this class has a method that returns the value (as a string) associated with the specified setting identifier.

    // Java example
    Properties config = new Properties();
    config.load(...);
    String valueStr = config.getProperty("listening-port");
    // ...

    // C# example
    NameValueCollection setting = ConfigurationManager.AppSettings;
    string valueStr = setting["listening-port"];
    // ...

    In both cases we should parse the strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on.

    Suppose that the application should load the settings as soon as it starts: in other words, the first operation performed by the application is to load the settings. Any invalid values for the settings must be replaced automatically with default values; if this happens to a group of related settings, those settings are all set to default values.

    The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values, and finally sets any default values. However, maintenance is difficult with this approach: as the number of settings increases while developing the application, it becomes increasingly difficult to update the code. In order to solve this problem, I had thought of using the Template Method pattern, as follows.

    public abstract class Setting
    {
        protected abstract bool TryParseValues();
        protected abstract bool CheckValues();
        public abstract void SetDefaultValues();

        /// <summary>
        /// Template Method
        /// </summary>
        public bool TrySetValuesOrDefault()
        {
            if (!TryParseValues() || !CheckValues())
            {
                // parsing error or domain error
                SetDefaultValues();
                return false;
            }
            return true;
        }
    }

    public class RangeSetting : Setting
    {
        private string minStr, maxStr;
        private byte min, max;

        public RangeSetting(string minStr, string maxStr)
        {
            this.minStr = minStr;
            this.maxStr = maxStr;
        }

        protected override bool TryParseValues()
        {
            return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max));
        }

        protected override bool CheckValues()
        {
            return (0 < min && min < max);
        }

        public override void SetDefaultValues()
        {
            min = 5;
            max = 10;
        }
    }

    The problem is that in this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary:

    - Easy maintenance: for example, the addition of one or more parameters.
    - Extensibility: a first version of the application could read a single configuration file, but later versions may offer a multi-user setup (an admin sets up a basic configuration, users can change only certain settings, etc.).
    - Object-oriented design.
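    For what it's worth, the classes above would be driven like this. This is a minimal, hypothetical call site; the "5"/"20" strings stand in for raw values that would really come from App.config:

    using System;

    class Program
    {
        static void Main()
        {
            // Hypothetical driver for the Setting hierarchy above.
            var range = new RangeSetting("5", "20");
            if (!range.TrySetValuesOrDefault())
            {
                // Parsing or domain error: the defaults (min=5, max=10) were applied.
                Console.WriteLine("Range setting fell back to defaults.");
            }
        }
    }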

    Read the article

  • Entity Framework - Condition on one to many join (Lambda)

    - by nirpi
    Hi, I have 2 entities, Customer & Account, where a customer can have multiple accounts. On the account I have a "PlatformTypeId" field, which I need to condition on (multiple values), among other criteria. I'm using lambda expressions to build the query. Here's a snippet:

    var customerQuery = (from c in context.CustomerSet.Include("Accounts")
                         select c);

    if (criterions.UserTypes != null && criterions.UserTypes.Count() > 0)
    {
        List<short> searchCriterionsUserTypes = criterions.UserTypes.Select(i => (short)i).ToList();
        customerQuery = customerQuery.Where(
            CommonDataObjects.LinqTools.BuildContainsExpression<Customer, short>(
                c => c.UserTypeId, searchCriterionsUserTypes));
    }

    // Other criterions, including the problematic platforms condition (below)

    var customers = customerQuery.ToList();

    I can't figure out how to build the accounts' platforms condition:

    if (criterions.Platforms != null && criterions.Platforms.Count() > 0)
    {
        List<short> searchCriterionsPlatforms = criterions.Platforms.Select(i => (short)i).ToList();
        customerQuery = customerQuery.Where(
            c => c.Accounts.Where(
                LinqTools.BuildContainsExpression<Account, short>(
                    a => a.PlatformTypeId, searchCriterionsPlatforms)));
    }

    (BuildContainsExpression is a method we use to build the expression for the multi-select.) I'm getting a compilation error:

    The type arguments for method 'System.Linq.Enumerable.Where<TSource>(System.Collections.Generic.IEnumerable<TSource>, System.Func<TSource, bool>)' cannot be inferred from the usage. Try specifying the type arguments explicitly.

    Any idea how to fix this? Thanks, Nir.
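    A possible fix, offered as a sketch rather than verified against this model: the inner Where returns a sequence of accounts, but the outer Where needs a bool per customer. Wrapping the membership test in Any, with Contains over the ID list (which LINQ to Entities translates to a SQL IN clause in EF 4 and later), gives the predicate the right shape:

    // Hypothetical rewrite of the platforms condition: Any(...) yields the bool
    // the outer Where expects, and Contains handles the multi-select values.
    if (criterions.Platforms != null && criterions.Platforms.Count() > 0)
    {
        List<short> searchCriterionsPlatforms = criterions.Platforms.Select(i => (short)i).ToList();
        customerQuery = customerQuery.Where(
            c => c.Accounts.Any(a => searchCriterionsPlatforms.Contains(a.PlatformTypeId)));
    }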

    Read the article

  • uploadify scriptData problem

    - by elpaso66
    Hi, I'm having problems with scriptData on uploadify. I'm pretty sure the config syntax is fine, but whatever I do, scriptData is not passed to the upload script. I tested in both FF and Chrome with flash v. Shockwave Flash 9.0 r31. This is the config:

    $(document).ready(function() {
        $('#id_file').uploadify({
            'uploader'       : '/media/filebrowser/uploadify/uploadify.swf',
            'script'         : '/admin/filebrowser/upload_file/',
            'scriptData'     : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'},
            'checkScript'    : '/admin/filebrowser/check_file/',
            'cancelImg'      : '/media/filebrowser/uploadify/cancel.png',
            'auto'           : false,
            'folder'         : '',
            'multi'          : true,
            'fileDesc'       : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
            'fileExt'        : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
            'sizeLimit'      : 10485760,
            'scriptAccess'   : 'sameDomain',
            'queueSizeLimit' : 50,
            'simUploadLimit' : 1,
            'width'          : 300,
            'height'         : 30,
            'hideButton'     : false,
            'wmode'          : 'transparent',
            translations     : {
                browseButton: 'BROWSE',
                error: 'An Error occured',
                completed: 'Completed',
                replaceFile: 'Do you want to replace the file',
                unitKb: 'KB',
                unitMb: 'MB'
            }
        });
        $('input:submit').click(function(){
            $('#id_file').uploadifyUpload();
            return false;
        });
    });

    I checked that other values (file name) are passed correctly, but session_key is not. This is the decorator code from django-filebrowser; you can see it checks for request.POST.get('session_key'). The problem is that request.POST is empty.

    def flash_login_required(function):
        """
        Decorator to recognize a user by its session. Used for Flash-Uploading.
        """
        def decorator(request, *args, **kwargs):
            try:
                engine = __import__(settings.SESSION_ENGINE, {}, {}, [''])
            except:
                import django.contrib.sessions.backends.db
                engine = django.contrib.sessions.backends.db
            print request.POST
            session_data = engine.SessionStore(request.POST.get('session_key'))
            user_id = session_data['_auth_user_id']
            # will return 404 if the session ID does not resolve to a valid user
            request.user = get_object_or_404(User, pk=user_id)
            return function(request, *args, **kwargs)
        return decorator

    Read the article

< Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >