Search Results

Search found 16208 results on 649 pages for 'pass community'.

Page 487/649

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team." I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly, I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits into the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project. When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different; especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment. Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for, like code, it is bound to increase its clarity and usefulness. Cheers, Tony.
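
    By way of illustration (the procedure, table and columns below are invented for the example, not taken from any real system or from the editorial), the kind of public-interface header being described might look like this in T-SQL:

        /*
          Procedure: dbo.usp_GetOverdueInvoices   (hypothetical example)
          Purpose:   Returns one row per invoice that is more than @DaysOverdue days past its due date.
          Fits in:   Called by the nightly dunning batch and by the "Overdue" tab of the customer account screen.
          Usage:     EXEC dbo.usp_GetOverdueInvoices @DaysOverdue = 30;
          Returns:   InvoiceID, CustomerID, DueDate, AmountOutstanding
          History:   2010-11-20  TD  Created.
        */
        CREATE PROCEDURE dbo.usp_GetOverdueInvoices
            @DaysOverdue int
        AS
        BEGIN
            -- The body stays simple; the header carries the plain-English contract.
            SELECT InvoiceID, CustomerID, DueDate, AmountOutstanding
            FROM dbo.Invoices
            WHERE DATEDIFF(DAY, DueDate, GETDATE()) > @DaysOverdue;
        END

    The particular fields in the header matter less than the effect: a reader can see what the procedure is for, where it fits, and how to call it, without reading the body at all.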

    Read the article

  • No database selected error in CodeIgniter running on MAMP stack

    - by Apophenia Overload
    First off, does anyone know of a good place to get help with CodeIgniter? The official community forums are somewhat disappointing in terms of getting many responses. I have ci installed on a regular MAMP stack, and I’m working on this tutorial. However, I have only gone through the Created section, and currently I am getting a No database selected error. Model: <?php class submit_model extends Model { function submitForm($school, $district) { $data = array( 'school' => $school, 'district' => $district ); $this->db->insert('your_stats', $data); } } View: <?php $this->load->helper('form'); ?> <?php echo form_open('main'); ?> <p> <?php echo form_input('school'); ?> </p> <p> <?php echo form_input('district'); ?> </p> <p> <?php echo form_submit('submit', 'Submit'); ?> </p> <?php echo form_close(); ?> Controller: <?php class Main extends controller { function index() { // Check if form is submitted if ($this->input->post('submit')) { $school = $this->input->xss_clean($this->input->post('school')); $district = $this->input->xss_clean($this->input->post('district')); $this->load->model('submit_model'); // Add the post $this->submit_model->submitForm($school, $district); } $this->load->view('main_view'); } } database.php $db['default']['hostname'] = "localhost:8889"; $db['default']['username'] = "root"; $db['default']['password'] = "root"; $db['default']['database'] = "stats_test"; $db['default']['dbdriver'] = "mysql"; $db['default']['dbprefix'] = ""; $db['default']['pconnect'] = TRUE; $db['default']['db_debug'] = TRUE; $db['default']['cache_on'] = FALSE; $db['default']['cachedir'] = ""; $db['default']['char_set'] = "utf8"; $db['default']['dbcollat'] = "utf8_general_ci"; config.php $config['base_url'] = "http://localhost:8888/ci/"; ... $config['index_page'] = "index.php"; ... $config['uri_protocol'] = "AUTO"; So, how come it’s giving me this error message? A Database Error Occurred Error Number: 1046 No database selected INSERT INTO `your_stats` (`school`, `district`) VALUES ('TJHSST', 'FairFax') Is there any way for me to test if CodeIgniter can actually detect the mySQL databases I've created with phpMyAdmin in my MAMP stack?
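
    A hedged suggestion rather than a confirmed fix for this particular setup: MySQL error 1046 means the connection itself succeeded but no default database was selected, so it is worth checking what CodeIgniter actually connects to before touching the model. The database library can be autoloaded ($autoload['libraries'] = array('database'); in application/config/autoload.php) or loaded explicitly, and the DB driver exposes list_tables(), so a throwaway controller like the sketch below (the class name Dbcheck is made up for the test) will show whether the stats_test database is reachable and selected. On MAMP it can also help to try 127.0.0.1:8889 as the hostname, since localhost may route to a socket and ignore the port.

        <?php
        // Throwaway CodeIgniter 1.x controller, purely to probe the DB connection.
        class Dbcheck extends Controller {
            function index()
            {
                $this->load->database();                 // connect using config/database.php
                echo 'Using database: ' . $this->db->database . '<br/>';
                foreach ($this->db->list_tables() as $table) {
                    echo $table . '<br/>';               // should list the tables created in phpMyAdmin
                }
            }
        }

    If the table list comes back correctly, the problem is elsewhere; if it fails here, the database.php settings (hostname/port and database name) are the place to look.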

    Read the article

  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you’ll frequently come across phrases like “Ubuntu is based on Debian” but what exactly does that mean? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader PLPiper is trying to get a handle on how Linux variants work: I’ve been looking through quite a number of Linux distros recently to get an idea of what’s around, and one phrase that keeps coming up is that “[this OS] is based on [another OS]“. For example: Fedora is based on Red Hat Ubuntu is based on Debian Linux Mint is based on Ubuntu For someone coming from a Mac environment I understand how “OS X is based on Darwin”, however when I look at Linux Distros, I find myself asking “Aren’t they all based on Linux..?” In this context, what exactly does it mean for one Linux OS to be based on another Linux OS? So, what exactly does it mean when we talk about one version of Linux being based off another version? The Answer SuperUser contributor kostix offers a solid overview of the whole system: Linux is a kernel — a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), and binary conventions on how to precisely use it (Application Binary Interface, ABI) available to the “user-space” applications. Debian, RedHat and others are operating systems — complete software environments which consist of the kernel and a set of user-space programs which make the computer useful as they perform sensible tasks (sending/receiving mail, allowing you to browse the Internet, driving a robot etc). Now each such OS, while providing mostly the same software (there are not so many free mail server programs or Internet browsers or desktop environments, for example) differ in approaches to do this and also in their stated goals and release cycles. Quite typically these OSes are called “distributions”. This is, IMO, a somewhat wrong term stemming from the fact you’re technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don’t need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine. Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, and archive servers, mailing list software etc etc etc) and staff. This obviously raises a high barrier for creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures — go figure how much work is put into supporting this stuff. Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by just importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation etc. Note that there are variations to this “based on” thing. 
    For instance, Debian fosters the creation of “pure blends” of itself: distributions which use Debian rather directly, and just add a bunch of packages and other stuff only useful for rather small groups of users such as those working in education, medicine or the music industry, etc. Another twist is that not all these OSes are based on Linux. For instance, Debian also provides FreeBSD and Hurd kernels. They have quite tiny user groups, but anyway. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • All for one and one for all…the power of partnership in higher education

    - by Student Solutions Team-Oracle
    Recently, several of our Oracle student solutions team members were in Latin America at a user group conference. Not an Oracle user group—although the conference was about and for higher education customers using Oracle software—but a Higher Education User Group (HEUG) conference. So what’s the difference? First of all, the HEUG is an entirely independent organization from Oracle, incorporated as a 501(c)(3) non-profit corporation governed by a Board of Directors. As a self-governing organization, the more than 23,000 higher education members (and growing!) actively participate in a multitude of initiatives, communications and shared-learning opportunities that benefit each of them and their institutions. For example, one of these programs includes 16 active and effective Product Advisor Groups (PAGs) that interact directly with Oracle management, developers and business partners to provide input into product strategies and improvements. The HEUG also provides a variety of online tools to help its members navigate the world of Oracle applications software. There’s a lot more that this organization does, but you can go to www.heug.org yourself to learn more; we want to get back to our story! Anyway, as we were leaving the HEUG conference in Latin America, one of the guests invited to attend commented: “Do these users realize and appreciate how many people from Oracle come to support them? You have a much larger representation at these types of conferences than any other vendor. It shows the tremendous support you have for your higher education customers.” So that’s it! This is why the partnership between the HEUG and Oracle is so powerful and unique in the software industry. Two distinct, independent organizations come together focused entirely on providing the highest value and mutual benefit to each member, each organization and the larger higher education community. Through open communications and active engagement since the HEUG was formed in 1998, our partnership today is stronger than it has ever been and membership is growing globally. Result? Everyone benefits. All for one and one for all—we are in this together. We’ve got a lot going on in the student solutions team and are working closely with customers and the HEUG to move ahead on continued development for PeopleSoft Campus Solutions 9.2 and a new Oracle Student Cloud. Come back here for more stories, news and information! --Oracle Student Solutions Team

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set of parameter values / predicates. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article; it covers Hash Match operations. When the plan is cached by using a stored procedure or other plan caching mechanisms like sp_executesql or prepared statements, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to a spill over tempdb, resulting in poor performance. This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance. To read additional articles I wrote, click here. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates. The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts Enough theory; let’s see an example where we initially sort 1 month of data and then use the same stored procedure to sort 6 months of data. Let’s create a stored procedure that sorts customers by name within a certain date range. --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1) end go Let’s execute the stored procedure initially with a 1 month date range. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
    The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be OK. There were no Sort Warnings in SQL Profiler. Now let’s execute the stored procedure with a 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way off from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 1 month in our case. This underestimation will lead to a sort spill over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler. To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts. Let’s recompile the stored procedure and then first execute it with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), due to locking issues involved with sp_recompile. Refer to our webcasts for further details. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, is similar to the actual number of rows, 259200. The better performance of this stored procedure is due to better estimation of memory and avoiding a sort spill over tempdb. There were no Sort Warnings in SQL Profiler. Now let’s execute the stored procedure with the 1 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 months of data estimation (259200 rows) instead of the 1 month of data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details. Let’s recompile the stored procedure and then first execute it with a 2 day date range. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-02' go The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler. Now let’s execute the stored procedure with the 6 month date range.
    --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed earlier that this stored procedure with a 6 month date range needed 26832 KB of memory to execute optimally without a spill over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort. This occurs when the granted memory is too low. Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range. Let’s recreate the stored procedure with the recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1, recompile) end go Let’s execute the stored procedure initially with the 1 month date range and then with the 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms are not expensive in this case compared to the performance benefit. Let’s recreate the stored procedure with the optimize for hint for the 6 month date range. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30')) end go Let’s execute the stored procedure initially with the 1 month date range and then with the 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has an overestimation of rows and memory. This is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data.
    Let’s execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-12-31' go The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be a spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint. This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance. Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be OK to pay for compilation rather than spilling the sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than hinted, this will still lead to a spill. On the other hand, there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint have been archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend watching them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here. Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
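
    As a side note not taken from the original walkthrough: the memory grant figures quoted above can also be watched live while a query runs. The sys.dm_exec_query_memory_grants DMV shows requested versus granted versus used memory for currently executing queries, for example from a second connection while the stored procedure executes:

        --Sketch only: run from a separate session while CustomersByCreationDate is executing
        select session_id, dop, requested_memory_kb, granted_memory_kb,
               used_memory_kb, max_used_memory_kb, query_cost
        from sys.dm_exec_query_memory_grants
        order by granted_memory_kb desc;

    A granted_memory_kb figure far above max_used_memory_kb suggests overestimation, while a grant that is used to its limit together with Sort Warnings points at the underestimation and tempdb spill described above.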

    Read the article

  • How to Set Up Your Enterprise Social Organization

    - by Mike Stiles
    The rush for business organizations to establish, grow, and adopt social was driven out of necessity and inevitability. The result, however, was a sudden, booming social presence creating touch points with customers, partners and influencers, but without any corporate social organization or structure in place to effectively manage it. Even today, many business leaders remain uncertain as to how to corral this social media thing so that it makes sense for their enterprise. Imagine their panic when they hear one of the most beneficial approaches to corporate use of social involves giving up at least some hierarchical control and empowering employees to publicly engage customers. And beyond that, they should also be empowered, regardless of their corporate status, to engage and collaborate internally, spurring “off the grid” innovation. An HBR blog points out that traditionally, enterprise organizations function from the top down, and employees work end-to-end, structured around business processes. But the social enterprise opens up structures that up to now have not exactly been embraced by turf-protecting executives and managers. The blog asks, “What if leaders could create a future where customers, associates and suppliers are no longer seen as objects in the system but as valued sources of innovation, ideas and energy?” What if indeed? The social enterprise activates internal resources without the usual obsession with position. It is the dawn of mass collaboration. That does not, however, mean this mass collaboration has to lead to uncontrolled chaos. In an extended interview with Oracle, Altimeter Group analyst Jeremiah Owyang and Oracle SVP Reggie Bradford paint a complete picture of today’s social enterprise, including internal organizational structures Altimeter Group has seen emerge. One sign of a mature social enterprise is the establishing of a social Center of Excellence (CoE), which serves as a hub for high-level social strategy, training and education, research, measurement and accountability, and vendor selection. This CoE is led by a corporate Social Strategist, most likely from a Marketing or Corporate Communications background. Reporting to them are the Community Managers, the front lines of customer interaction and engagement; business unit liaisons that coordinate the enterprise; and social media campaign/product managers, social analysts, and developers. With content rising as the defining factor for social success, Altimeter also sees a Content Strategist position emerging. Across the enterprise, Altimeter has seen 5 organizational patterns. Watching the video will give you the pros and cons of each. Decentralized - Anyone can do anything at any time on any social channel. Centralized – One central groups controls all social communication for the company. Hub and Spoke – A centralized group, but business units can operate their own social under the hub’s guidance and execution. Most enterprises are using this model. Dandelion – Each business unit develops their own social strategy & staff, has its own ability to deploy, and its own ability to engage under the central policies of the CoE. Honeycomb – Every employee can do social, but as opposed to the decentralized model, it’s coordinated and monitored on one platform. The average enterprise has a whopping 178 social accounts, nearly ¼ of which are usually semi-idle and need to be scrapped. The last thing any C-suite needs is to cope with fragmented technologies, solutions and platforms. 
It’s neither scalable nor strategic. The prepared, effective social enterprise has a technology partner that can quickly and holistically integrate emerging platforms and technologies, such that whatever internal social command structure you’ve set up can continue efficiently executing strategy without skipping a beat. @mikestiles

    Read the article

  • Reflections on Life

    - by MOSSLover
    I haven’t written a blog post in a while.  I understand there is blog neglect going on, but there is a lot going on in my life.  I am trying really hard to embrace the change and roll with everything thrown my way.  I had a really hard year it was not my best and it was not my worst.  I cannot say it was entirely hard, because January 1st I received the MVP Award.  If you know me you know the three things that happened starting in August, but if you really know me it was miserable for a substantial period of time prior to August.  There was some personal life issues I neglected to deal with that came into a headway.  Anyway I’d like to think that as of today I am doing much better.  I finally went to Paris and London.  I found out I love Paris and Nottingham.  I think that London is something I need to visit a few more times.  I would love to go back to the UK and France.  I think I’d love to live overseas someday, but not anytime soon. The past few weeks were like a whirlwind experience.  I felt like I had been sitting around for months just waiting for this trip and the big move.  Maybe it was something I was waiting to do for several years.  I needed a big change.  I needed to get unstuck.  I feel like August, however horrible it was, helped me get to the point where I am somewhere happy.  For at least two years I have been miserable outside of my work (community and otherwise).  I was just downright unhappy.  One of my coworkers said that my tweets were just horrible this past year.  Depressing might I add.  I agree they were incredibly depressing for the past several years.  But things are on an upturn.  I decided a month or so ago that I was going to do all the things I have wanted to do without looking back.  So I dove into this trip and into this move to NYC head first.  I was scared for a bit and I didn’t think it would come through.  Everyone friend-wise and coworker-wise has helped me accomplish this great feat.  I am now a New Yorker and as of January 1st 100% living in the city. Thank you for those who have checked up on me.  Thank you for those who listened to all my problems and continue to do so.  Thank you to everyone who has helped me through this really terrible time.  You guys mean the world to me.  You are my friends.  Some of you I have not met and some of you I barely know.  I have been to a lot of events where people just walked up to me and asked me if I was doing ok.  I will continue to keep moving forward one foot in front of the other.  If I ever get so down again please remind me about this year.  I hope to see you all in the upcoming year as I attend more events.  Have a good night or a good morning or a good afternoon.  I will catch you all later. Technorati Tags: Life,2011,Disaster Year,Happinness

    Read the article

  • Play a New Random Game Each Day in Chrome

    - by Asian Angel
    Being able to unwind for a few moments each day can make the time pass so much better and help you feel refreshed. If your favorite method for relaxing is playing a quick game, then join us as we take a look at the Random Games from MyGiochi.net extension for Google Chrome. Random Games from MyGiochi.net in Action The really great thing about this extension is that each day you can have a new random game to play. If you love variety this is definitely going to be a perfect match for you. We got “Power Golf” as our random game of the day. Here is a look at things once we got started…this one can be a lot of fun to play. Time to move on to the third hole now… What if you want something different from the game available on any given day? In the upper right corner you will find links for “game categories” that you can look through (clicking on the links will open a new tab). Since the links are in Italian you might need to experiment a little bit to find the category that you want to browse through. We chose the “Games for Girls Category”. With Chrome’s new built in “Translation Bar” you can easily switch the page over to the language of your choice. Note: Translation Bar available in Dev Channel releases. Ready to choose a fun game to play! You really can have a lot of fun with the games available at My Giochi. With our “game of the day” we had a second option for other games to try. More games equals more fun! Conclusion If playing online games is your favorite way to relax then the MyGiochi.net extension will make a great addition to your browser. Have fun with all of those new games each day! Links Download the Random Games from MyGiochi.net extension (Google Chrome Extensions)

    Read the article

  • Gamify your Web

    - by Isabel F. Peñuelas
    Yesterday Valencia welcomed the Gamification World Congress, which I followed virtually through #GWC2012. BBVA, Iberia, Ligeresa, Axe, Wayra, ESADE, GlaxoSmithKline, Macmillan, Gamisfaction, Nomaders and Blaffin were among the companies presenting success stories on gaming. It has been proven that people remember things more easily when an emotion is created. The marketing expectations around Gamification techniques have a lot to do with Neuromarketing theories. There are a lot of expectations on internal enterprise Gamification. In the public Web some sectors are taking the lead in following the trend. The Gartner analyst Brian Burke opened another recent Gamification event in Madrid, reminding us that “Gamification is mostly about Engagement”, and this can be applied both to customers and to employees. Gamification and Banking The experience of the Spanish financial group BBVA, which just launched BBVA Game, was also presented a week ago at the BBVA Innovation Centre during the event “Gamification & Banking: a fad or a serious business?”. One of the objectives of the BBVA Game was to double the number of registered users. “People who like the efficiency of the online channel want to keep a one-to-one contact with the brand”, explained Bernardo Crespo. Another interesting piece of data coming out of the BBVA presentation was that “only 20% of Spanish users – out of the total holders of bank accounts in the country – are familiar with the use of a web site to consult their bank accounts”; the project also aims to reverse this situation, helping people to learn by making heavy use of video in the gaming context. In general, Banking presenters seem to agree that Gamification techniques are helping to increase the time spent on the Web. Gamification and Health Using Gamification techniques for chronic illness rehabilitation was another topic of the World Congress. Here you can find some ideas and experiences: What can games do for the health (in Spanish). I have personally started my own mental-health gaming project at http://www.lumosity.com/ Gamification in the Enterprise I really recommend reading this excellent post by Ultan ÓBroin, Introduction to Gamification and Applications. Employee motivation and learning are experiencing a 360º turn, and it looks like some of us will soon become the Dragon of the Year instead of the Employee of the Year. Using Web 2.0 Tools for Gamification Projects What type of tools do we need for a quick-win Gamification project? To a certain extent, Gamification can be considered an evolution of the participative Web. Badging, avatars, points and awards, leader boards, progress charts, virtual currencies, gifting, and giving challenges and quests are common components and elements. The Web is offering new development frameworks for that purpose, such as this Avatar Framework from PayPal or Badgeville, to include in web applications. Besides, tools to create communities around a game are required, to comment, share and vote on players, as well as for efficient multimedia management. Its entirely open architecture, its community features, and its multimedia and imaging solutions are where I see WebCenter as a tool helping brands to succeed. Link to Sources & Recommended Readings YouTube Video of BBVAGame presentation Where To Apply Gamification In Your Incentive Jim Calhoun Cancer Challenge Ride and Walk For my Spanish Readers El aburrimiento es el enemigo número uno del éxito (Boredom is the number one enemy of success)

    Read the article

  • Pure Front end JavaScript with Web API versus MVC views with ajax

    - by eyeballpaul
    This is more of a discussion about what people's thoughts are these days on how to split a web application. I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass this back to the browser on a full page request, unless there were specific areas that I did not want to populate straight away; I would then use DOM page load events to call the server and load those areas using AJAX. Also, when it came to partial page refreshing, I would call an MVC action method which would return an HTML fragment that I could then use to populate parts of the page. This would be for areas that I did not want to slow down the initial page load, or areas that fitted better with AJAX calls. One example would be table paging. If you want to move on to the next page, I would prefer it if an AJAX call got that info rather than using a full page refresh. But the AJAX call would still return an HTML fragment. My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front end background? An intelligent front end developer that I work with prefers to do more or less nothing in the MVC views, and would rather do everything on the front end, right down to Web API calls populating the page. So rather than calling an MVC action method, which returns HTML, he would prefer to return a standard object and use JavaScript to create all the elements of the page. The front end developer way means that any benefits that I normally get with MVC model validation, including client side validation, would be gone. It also means that any benefits that I get from creating the views, with strongly typed HTML templates etc., would be gone. I believe this would mean I would need to write the same validation for both front end and back end validation. The JavaScript would also need to have lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use an MVC partial view for creating the row, and then return this as part of the AJAX call, which then gets injected into the table. Using a pure front end approach, the JavaScript would take in an object (for, say, a product) for the row from the API call, and then create a row from that object, creating each individual part of the table row. The website in question will have lots of different areas, from administration to forms to product searching, etc. It is not a website that I think needs to be architected as a single page application. What are everyone's thoughts on this? I am interested to hear from front end devs and back end devs.
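
    For readers less familiar with the two approaches being compared, here is a rough sketch (the URLs, element IDs and property names are invented for illustration, and jQuery is assumed, as a stock ASP.NET MVC project of that era ships with it). The first call expects the server to render the row's markup (an MVC partial view); the second expects plain JSON from a Web API endpoint and builds the row on the client:

        // Approach 1: the MVC action returns a ready-made HTML fragment (a partial view)
        $.get('/Products/Row', { id: 42 }, function (html) {
            $('#productTable tbody').append(html);   // inject the fragment as-is
        });

        // Approach 2: the Web API endpoint returns data; the client constructs the DOM itself
        $.getJSON('/api/products/42', function (product) {
            var row = $('<tr/>')
                .append($('<td/>').text(product.Name))
                .append($('<td/>').text(product.Price));
            $('#productTable tbody').append(row);
        });

    The trade-off in the question is visible even at this size: the first keeps markup generation (and the strongly typed templates and server-side validation messages that come with it) on the server, while the second moves all of that into JavaScript and leaves the server returning nothing but data.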

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think its fair to say that in its first year of being it has proved to be a very useful resource however Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted: Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago http://twitter.com/#!/Kendra_Little/status/15538234184441856 This highlights an inherent problem with Twitter’s search capability – it simply does not reach far enough back in time. I have made steps to remedy that situation by putting into place two initiatives to archive Tweets that contain the #sqlhelp hashtag. The Archivist http://archivist.visitmix.com/ is a free service that, quite simply, archives a history of tweets that contain a given search term by periodically polling Twitter’s search service with that search term and subsequently displaying a dashboard providing an aggregate view of those tweets for things like tweet volume over time, top users and top words (Archivist FAQ). I have set up an archive on The Archivist for “sqlhelp” which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up: There is lots of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner ) and that SSIS has proven to be a rather (ahem) popular subject!! Datasift The Archivist has its uses though for our purposes it has a couple of downsides. For starters you cannot search through an archive (which is what Kendra was after) and nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provide them through an object called a Datasift stream. That sounds very similar to normal Twitter search though it has one distinct advantage that other Twitter search tools do not – Datasift has access to Twitter’s Streaming API (aka the Twitter Firehose). In addition it has access to a lot of other rather nice features: It provides the Datasift API that allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool J ) It has a query language (called Filtered Stream Definition Language – FSDL for short) A Datasift stream can consume (and filter) other Datasift streams Datasift can (and does) consume services other than Twitter If I refer to Datasift as “ETL for tweets” then you may get some sort of idea what it is all about. Just as I did with The Archivist I have set up a publicly available Datasift stream for “sqlhelp” at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data: twitter.text contains "sqlhelp" Pretty simple eh? At the current time it provides little more than a rudimentary dashboard but as Datasift is currently an alpha release I think this may be worth keeping an eye on. 
The real value though is the ability to consume the output of a stream via Datasift’s RESTful API, observe: http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX (Note that an api_key is required during the alpha period so, given that I’m not supplying my api_key, this URI will not work for you) Just to prove that a Datasift stream can indeed consume data from another stream I have set up a second stream that further filters the first one for tweets containing “SSIS”. That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query: rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis") When Datasift moves beyond alpha I’ll re-assess how useful this is going to be and post a follow-up blog. @Jamiet

    Read the article

  • Silverlight Cream for December 13, 2010 -- #1010

    - by Dave Campbell
    In this Issue: Rénald Nollet, Benjamin Gavin, Dennis Doomen, Tim Greenfield, Mike Taulty, Jeff Blankenburg, Michael Crump, Laurent Duveau, Dragos Manolescu, KeyboardP, Yochay Kiriaty. Above the Fold: Silverlight: "Silverlight RIA Services and Basic, Anonymous Authentication" Benjamin Gavin WP7: "Solving Circular Navigation in Windows Phone Silverlight Applications" Yochay Kiriaty SQL Azure: "SQL Azure Database Manager – Part 1 : How to connect to your SQL Azure DB" Rénald Nollet Shoutouts: Yochay Kiriaty has a post up on the Windows Phone Developer Blog about open source (MSPL) projects helping WP7 devs: Windows Phone Recipes – Helping the Community Jesse Liberty's latest Yet Another Podcast is up and this time it's Joe Stagner: Yet Another Podcast #18 – Joe Stagner Josh Schwartzberg sent me this link to what is apparently his yearly web-only rock Christmas album: MetalXmas... done in Silverlight and RIA Services From SilverlightCream.com: SQL Azure Database Manager – Part 1 : How to connect to your SQL Azure DB Rénald Nollet posted Part 1 of a series on a SQL Azure database manager all in Silverlight... has a live demo running, some description, and is making us wait for the next part! Silverlight RIA Services and Basic, Anonymous Authentication Benjamin Gavin has a quick post up resolving a basic RIA Services problem that I bet a lot of folks are looking for the answer on... like 500 series errors... cool little find he ferreted out... A night of Silverlight, WPF, unit testing and Caliburn Micro Dennis Doomen in concert with his employer gave a couple of talks at the local DotNED user group, and covered literally a cornucopia of topics... slides and example code for both talks... lotsa material here... Tim Greenfield on PuzzleTouch WP7 Application Tim Greenfield is the latest WP7 app developer to be interviewed by the SilverlightShow crew... lots of interesting comments and insight from Tim. Rebuilding the PDC 2010 Silverlight Application (Part 4) Mike Taulty has part 4 of his PDC 2010 Silverlight app construction project up and is taking the app into Blend, and the considerations that brings to the table. What I Learned In WP7 – Issue #2 Jeff Blankenburg continues his "What I Learned" series with this discussion about fonts, the Non-Linear Navigation service I mention below, and possible WP7 jobs. Part 3 of 4 : Tips/Tricks for Silverlight Developers Michael Crump has Part 3 of his Tips/Tricks up today. Lots of goodies this time: underlining in a TextBlock, getting browser info, startup params, VisualTreeHelper, and child windows. My Windows Phone 7 presentation in Montreal Laurent Duveau gave a WP7 presentation in Montreal as part of the Microsoft Windows Phone 7 Developer's Briefing, and has posted his materials and slide deck. WP7 Code: Mocking Event Streams with IEnumerable Dragos Manolescu has a very cool post up on using IEnumerable to mock event streams by leveraging the IObservable/IEnumerable duality, and uses the 2D bubble app that you can run and test in the emulator without needing an accelerometer. Transparent Wallpapers – Video Tutorial KeyboardP has had so many queries about his Transparent wallpaper for WP7 that he produced a video tutorial for it... Solving Circular Navigation in Windows Phone Silverlight Applications Yochay Kiriaty discusses the first recipe they are releasing ... see the shoutout above, a Nonlinear Navigation Service ... to help with apps that have loops in navigation. Stay in the 'Light!

    Read the article

  • PHP Web Services - Nice try

    Thanks to the membership in the O'Reilly User Group Programme the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication of Lorna Jane Mitchell - 'PHP Web Services: APIs for the Modern Web'. Following is the book review I put on Amazon: Nice try! Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and Web Services, independently whether they are written in PHP or not. And unfortunately, the title isn't able to stand up to the readers (or at least my) expectations. Maybe as a light defense, there is no usual paragraph about the intended audience of that book, but still I have to admit that the first half (chapters 1 to 8) are well written and Lorna has her points on the various technologies. Also, the code samples in PHP are clean and easy to understand. With chapter 'Debugging Web Services' the book started to change my mind about the clarity of advice and the instructions on designing and developing good APIs. Eventually, this might be related to the fact that I'm used to other tools since years, like Telerik Fiddler as HTTP proxy in order to trace and inspect any kind of request/response handling. Including localhost monitoring, SSL certification acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free. What really got me off the hook is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier the author recommends to stay away from 'old-fashioned' API styles like SOAP (if possible). And on top of that I wonder why there are tons of documentation towards development of RESTful Web Services based on WebAPI. The ASP.NET stack clearly moves away from SOAP to JSON and REST since years! Honestly, as a software developer on the .NET stack this leaves a mixed feeling after all. As for the remaining chapters I simply consider them as 'blah blah' without any real value and lots of theoretical advice. Related to the chapter 13 about 'Documentation', I just had the 'pleasure' to write a C#-based client against a Java-based SOAP Web Service. Personally, I take the WSDL as the master reference in the first place and Visual Studio generates all the stub types involved in the communication. During the implementation and testing I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as Nullable, so nothing to worry about, or? Well, I logged in a support ticket, and guess what was the response to that scenario? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" - No comment! Lorna's title is a quick read and in some areas she has good advice on designing and implementing Web Services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try and I'm looking forward to an improved second edition. Honestly, I never thought that I would come across a poor review. In general, it's a good book but it clearly has a lack of depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.

    Read the article

  • Get to Know a Candidate (16 of 25): Stewart Alexander&ndash;Socialist Party USA

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced for Wikipedia. Alexander is an American democratic socialist politician and a resident of California. Alexander was the Peace and Freedom Party candidate for Lieutenant Governor in 2006. He received 43,319 votes, 0.5% of the total. In August 2010, Alexander declared his candidacy for the President of the United States with the Socialist Party and Green Party. In January 2011, Alexander also declared his candidacy for the presidential nomination of the Peace and Freedom Party. Stewart Alexis Alexander was born to Stewart Alexander, a brick mason and minister, and Ann E. McClenney, a nurse and housewife.  While in the Air Force Reserve, Alexander worked as a full-time retail clerk at Safeway Stores and then began attending college at California State University, Dominguez Hills. Stewart began working overtime as a stocking clerk with Safeway to support himself through school. During this period he married to Freda Alexander, his first wife. They had one son. He was honorably discharged in October 1976 and married for the second time. He left Safeway in 1978 and for a brief period worked as a licensed general contractor. In 1980, he went to work for Lockheed Aircraft but quit the following year.  Returning to Los Angeles, he became involved in several civic organizations, including most notably the NAACP (he became the Labor and Industry Chairman for the Inglewood South Bay Branch of the NAACP). In 1986 he moved back to Los Angeles and hosted a weekly talk show on KTYM Radio until 1989. The show dealt with social issues affecting Los Angeles such as gangs, drugs, and redevelopment, interviewing government officials from all levels of government and community leaders throughout California. He also worked with Delores Daniels of the NAACP on the radio and in the street. The Socialist Party USA (SPUSA) is a multi-tendency democratic-socialist party in the United States. The party states that it is the rightful continuation and successor to the tradition of the Socialist Party of America, which had lasted from 1901 to 1972. The party is officially committed to left-wing democratic socialism. The Socialist Party USA, along with its predecessors, has received varying degrees of support, when its candidates have competed against those from the Republican and Democratic parties. Some attribute this to the party having to compete with the financial dominance of the two major parties, as well as the limitations of the United States' legislatively and judicially entrenched two-party system. The Party supports third-party candidates, particularly socialists, and opposes the candidates of the two major parties. Opposing both capitalism and "authoritarian Communism", the Party advocates bringing big business under public ownership and democratic workers' self-management. The party opposes unaccountable bureaucratic control of Soviet communism. Alexander has Ballot Access in: CO, FL, NY, OH (write-in access in: IN, TX) Learn more about Stewart Alexander and Socialist Party USA on Wikipedia.

    Read the article

  • How to Set Up Your Enterprise Social Organization?

    - by Richard Lefebvre
    By Mike Stiles on Dec 04, 2012 The rush for business organizations to establish, grow, and adopt social was driven out of necessity and inevitability. The result, however, was a sudden, booming social presence creating touch points with customers, partners and influencers, but without any corporate social organization or structure in place to effectively manage it. Even today, many business leaders remain uncertain as to how to corral this social media thing so that it makes sense for their enterprise. Imagine their panic when they hear one of the most beneficial approaches to corporate use of social involves giving up at least some hierarchical control and empowering employees to publicly engage customers. And beyond that, they should also be empowered, regardless of their corporate status, to engage and collaborate internally, spurring “off the grid” innovation. An HBR blog points out that traditionally, enterprise organizations function from the top down, and employees work end-to-end, structured around business processes. But the social enterprise opens up structures that up to now have not exactly been embraced by turf-protecting executives and managers. The blog asks, “What if leaders could create a future where customers, associates and suppliers are no longer seen as objects in the system but as valued sources of innovation, ideas and energy?” What if indeed? The social enterprise activates internal resources without the usual obsession with position. It is the dawn of mass collaboration. That does not, however, mean this mass collaboration has to lead to uncontrolled chaos. In an extended interview with Oracle, Altimeter Group analyst Jeremiah Owyang and Oracle SVP Reggie Bradford paint a complete picture of today’s social enterprise, including internal organizational structures Altimeter Group has seen emerge. One sign of a mature social enterprise is the establishing of a social Center of Excellence (CoE), which serves as a hub for high-level social strategy, training and education, research, measurement and accountability, and vendor selection. This CoE is led by a corporate Social Strategist, most likely from a Marketing or Corporate Communications background. Reporting to them are the Community Managers, the front lines of customer interaction and engagement; business unit liaisons that coordinate the enterprise; and social media campaign/product managers, social analysts, and developers. With content rising as the defining factor for social success, Altimeter also sees a Content Strategist position emerging. Across the enterprise, Altimeter has seen 5 organizational patterns. Watching the video will give you the pros and cons of each. Decentralized - Anyone can do anything at any time on any social channel. Centralized – One central groups controls all social communication for the company. Hub and Spoke – A centralized group, but business units can operate their own social under the hub’s guidance and execution. Most enterprises are using this model. Dandelion – Each business unit develops their own social strategy & staff, has its own ability to deploy, and its own ability to engage under the central policies of the CoE. Honeycomb – Every employee can do social, but as opposed to the decentralized model, it’s coordinated and monitored on one platform. The average enterprise has a whopping 178 social accounts, nearly ¼ of which are usually semi-idle and need to be scrapped. The last thing any C-suite needs is to cope with fragmented technologies, solutions and platforms. 
It’s neither scalable nor strategic. The prepared, effective social enterprise has a technology partner that can quickly and holistically integrate emerging platforms and technologies, such that whatever internal social command structure you’ve set up can continue efficiently executing strategy without skipping a beat. @mikestiles

    Read the article

  • ASP.Net MVC - how to post values to the server that are not in an input element

    - by David Carter
    Problem As was mentioned in a previous blog, I am building a web page that allows the user to select dates in a calendar and then shows the dates in an unordered list. The problem now is that those dates need to be sent to the server on page submit so that they can be saved to the database. If I were storing the dates in an input element, say a textbox, that wouldn't be an issue, but because they are in an HTML element whose contents are not posted to the server, an alternative strategy needs to be developed. Solution The approach that I took to solve this problem is as follows: 1. Place a hidden input field on the form <input id="hiddenDates" name="hiddenDates" type="hidden" value="" /> ASP.Net MVC has an Html helper with a method called Hidden() that will do this for you: @Html.Hidden("hiddenDates"). 2. Copy the values from the HTML element to the hidden input field before submitting the form The following JavaScript is added to the page: $(function () { $('#formCreate').submit(function () { PopulateHiddenDates(); }); }); function PopulateHiddenDates() { var dateValues = ''; $($('#dateList').children('li')).each(function(index) { dateValues += $(this).attr("id") + ","; }); $('#hiddenDates').val(dateValues); } I'm using jQuery to bind to the form submit event so that my method to populate the hidden field gets called before the form is submitted. The dateList element is an unordered list, and by using the jQuery each function I can iterate through all the <li> items that it contains, get each item's id attribute (to which I have assigned the value of the date in millisecs) and write them to the hidden field as a comma-delimited string. 3. Process the dates on the server [HttpPost] public ActionResult Create(string hiddenDates, int utcOffset) { List<DateTime> dates = GetDates(hiddenDates, utcOffset); } private List<DateTime> GetDates(string hiddenDates, int utcOffset) { List<DateTime> dates = new List<DateTime>(); var values = hiddenDates.Split(",".ToCharArray(), StringSplitOptions.RemoveEmptyEntries); foreach (var item in values) { DateTime newDate = new DateTime(1970, 1, 1).AddMilliseconds(double.Parse(item)).AddMinutes(utcOffset * -1); dates.Add(newDate); } return dates; } By declaring a parameter with the same name as the hidden field, ASP.Net will take care of finding the corresponding entry in the form collection posted back to the server and binding it to the hiddenDates parameter! Excellent! I now have the dates the user selected and I can save them to the database. I have also used the same technique to pass back a utcOffset so that I know what timezone the user is in, and I can show the dates correctly to users in other timezones if necessary (this isn't strictly necessary at the moment but I plan to introduce times later). Saving multiple dates from an unordered list - DONE!
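    As an aside, here is a minimal sketch of an alternative that relies on the same ASP.NET MVC default model binding; the controller name, action body, and input naming are hypothetical and not taken from the article. If the view rendered one hidden input per date, all named dates, the binder would populate an array parameter directly and the comma-delimited string plus manual Split would not be needed:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public class EventsController : Controller   // hypothetical controller, for illustration only
        {
            // Assumes the view emits <input type="hidden" name="dates" value="..." />
            // once per selected date, carrying the same millisecond values as above.
            [HttpPost]
            public ActionResult Create(long[] dates, int utcOffset)
            {
                List<DateTime> parsed = (dates ?? new long[0])
                    .Select(ms => new DateTime(1970, 1, 1)
                        .AddMilliseconds(ms)
                        .AddMinutes(-utcOffset))
                    .ToList();

                // ... persist 'parsed' to the database here ...
                return RedirectToAction("Index");
            }
        }

    The trade-off is slightly more markup (one hidden input per date) in exchange for not having to parse a delimited string on the server.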

    Read the article

  • Measuring ASP.NET and SharePoint output cache

    - by DigiMortal
    During ASP.NET output caching week on my local blog I wrote about how to measure the ASP.NET output cache. As my posting was based on real work and real-life results, I thought it might be interesting to you too. So here you can read what I did, how I did it, and what the result was. Introduction Caching is not effective without measuring it. As MVP Henn Sarv said in one of his sessions, you will get what you measure. And right he is. Lately I measured caching on a local Microsoft community portal to make sure that our caching strategy is good enough for the environment this system lives in. In this posting I will show you how to start measuring the cache of your web applications. Although the application measured is built on the SharePoint Server publishing infrastructure, all those counters have the same meaning as similar counters under pure ASP.NET applications. Measured counters I used Performance Monitor and the following performance counters (their names are similar on ASP.NET and SharePoint WCMS): Total number of objects added – how many objects were added to the output cache. Total object discards – how many objects were deleted from the output cache. Cache hit count – how many times requests were served from the cache. Cache hit ratio – the percentage of requests served from the cache. The first three counters are cumulative while the last one is a ratio. You can also use other counters to measure the full effect of caching (memory, processor, disk I/O, network load etc. before and after caching). Measuring process The measuring I describe here started from a freshly restarted web server. I measured the application over 12 hours, covering the time ranges when users are most active. The time range does not include late evening hours and night because there is nothing to measure during these hours. During measuring we performed no maintenance or administrative tasks on the server. All tasks performed were related to usual daily content management and content monitoring. Also, we had no advertisement campaigns or other promotions running at the same time. The results You can see the results in the following graphs: total number of objects added, total object discards, cache hit count, and cache hit ratio. You can see that adds and discards are growing at the same tempo. That is good, because the cache expires and not-so-popular items are not kept in memory. If there were more popular content, these lines would be farther apart. Cache hit count grows faster, and this shows that more and more content is served from the cache. In the current case it shows that the cache is filled optimally, and we can do even better if we tune the caches more. The site also contains pages that are discarded when some subsite changes (a page was added/modified/deleted), and one modification may affect about four or five pages. This may also decrease the cache hit count because during the day the site gets about 5-10 new pages. The cache hit ratio is currently extremely good. The suggested minimum is about 85%, but after some tuning and measuring I achieved 98.7%. This is due to the fact that new pages are requested most often, and after new pages are added the older ones are requested only occasionally. So they get discarded from the cache and only some of them will occasionally return to the cache. Although this may also indicate the need for additional SEO work, the result is very good in technical terms. Conclusion Measuring the ASP.NET output cache is not a complex thing to do, and you can start by measuring the performance of the cache.
    Later you can move on and measure the effect of caching on other counters such as disk I/O, network, processors etc. What you have to achieve is an optimal cache that is not full of items requested only a couple of times per day (you can avoid this by not using overly long cache durations). After some tuning you should be able to boost the cache hit ratio up to at least 85%.
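    To complement the Performance Monitor session, here is a minimal C# sketch that samples an output cache counter programmatically. The category, counter, and instance names are placeholders to verify in Performance Monitor on your own server, since pure ASP.NET and the SharePoint publishing cache expose differently named (though similar) counters, and reading counters may require elevated permissions.

        using System;
        using System.Diagnostics;
        using System.Threading;

        class OutputCacheSampler
        {
            static void Main()
            {
                // Placeholder names - check Performance Monitor for the exact strings
                // on your server before relying on them.
                using (var hitRatio = new PerformanceCounter(
                    "ASP.NET Applications", "Output Cache Hit Ratio", "__Total__", readOnly: true))
                {
                    // Sample once per minute; redirect the output to a file to keep
                    // a history alongside the Performance Monitor graphs.
                    for (int i = 0; i < 60; i++)
                    {
                        Console.WriteLine("{0:T}  output cache hit ratio: {1:F1}",
                            DateTime.Now, hitRatio.NextValue());
                        Thread.Sleep(TimeSpan.FromMinutes(1));
                    }
                }
            }
        }

    A scheduled sampler like this is handy when you want numbers from production over several days without leaving a Performance Monitor session open.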

    Read the article

  • How to create a Global Rule that stores a document’s folder path in a custom metadata field

    - by Nicolas Montoya
    Efficiency purists would argue that redundancy is not necessary. In real life, we are willing to pay a price for performance, i.e. to have information at our fingertips. We have run into customers opting to store a document’s folder path as a document metadata field. They have their reasons, half of the ECM community will agree with them, and the other half would raise an eyebrow. In the end, they are getting creative to achieve their document management goals. The steps below outline how to create a Global Rule that stores a document’s folder path in a custom metadata field: Create a Global Rule via Configuration Manager > Rules Tab > Add. Then check “Is global rule with priority”. Then check “Use rule activation condition”. Then go to “Edit” and check the actions for this Script Properties: Then click OK, and the following rule activation condition will appear: Then go to the Fields Tab and add a Rule Field: Select the target Custom Metadata Field and click OK, then check “Is derived field”, then “Edit”, then go to the Custom Tab in the Script Properties window and enter the below custom script: <$if #active.dCollectionPath$> <$dprDerivedValue=#active.dCollectionPath$> <$else$> <$dprDerivedValue=#active.xCollectionIDPath$> <$endif$> For more information on the dCollectionPath property, check Section 8.2 Folder Services in the Oracle® Fusion Middleware Services Reference Guide for Oracle Universal Content Management 11g Release 1 (11.1.1) http://docs.oracle.com/cd/E21043_01/doc.1111/e11011/c08_folders002.htm The above rule will keep the Custom Metadata Field updated with the Folder Path information when a document is checked in via the Content Server (CS) Web Interface or the Desktop Integration Suite (DIS).

    Read the article

  • The Loneliest Road in America and the OTN Garage

    - by rickramsey
    I never told anyone how the image of the OTN Garage on Facebook came to be. I took the Facebook picture on Route 50 in Nevada, USA, in October of 2010. I was riding from Colorado to Oracle OpenWorld in San Francisco, so it was probably October. Route 50 is known as "The Loneliest Road in America." There are roads across Nevada that have even LESS traffic, but Route 50 is still one. desolate. road. Although I have seen stranger things while riding along Nevada's Extraterrestrial Highway, I still run across notable oddities every time I ride Route 50. Like the old man with a bandolier of water bottles jogging along the side of the highway in the middle of the day, 50 miles from the closest town. First ultra-marathoner I'd seen in action. He waved at me. Or the dozen Corvettes with California license plates driving toward me, all doing the speed limit in the middle of nowhere because they were being tailed by half a dozen Nevada state troopers. #fail. I don't remember which town I was in, but I noticed the building when I stopped at the gas station. While standing there pouring fuel into the Harley, the store caught my eye. So I pulled the bike in front and walked inside. The owner is a little old lady, about 100 years old. Most of the goods she had on the shelves looked like they had been placed there during WWII. She was itty bitty and could barely see over the counter, but she was so happy when I bought a bar of Hershey's chocolate that she gave me a five-cent discount. I took a few pictures and, when I got back, Kemer Thomson, who sometimes blogs here, photoshopped the OTN Garage and Oil Change signs onto it. The bike is a 2009 Road King Classic with a Bob Dron fairing and a Corbin heated seat. The seat came in handy when I rode home over Tioga Pass. The Road King is a very comfy touring bike with a great Harley rumble. I'm kinda sorry I sold it. When I stopped for fuel about 75 miles down the road at the next town, I peeled back the chocolate bar. It had turned into powder. Probably 50 years ago. - Rick

    Read the article

  • 12.04 installation started to black screen during boot today

    - by Cedric
    NOTE: Most of this question is now irrelevant. UPDATE 3 summarizes the problem as it stands. I've been running 12.04 on my Lenovo laptop for one month now (updated from 11.04), and I had not had any significant problems until today. This morning, when I booted, I passed the Grub screen, then I got to the purple loading screen with dots as usual, then for some reason I ended up at the terminal login, with no GUI. startx gives me a black screen. Ctrl+F7-F8 didn't help either. It's similar to: After the update today no graphical interface anymore - 12.04 I followed the instructions at the end, to flush the ATI drivers (which I had installed), and fall back to the community drivers. That made me lose the login! Now I just get a black screen after the Ubuntu loading screen. I can still access the console through recovery, and I've gotten into VESA mode once or twice (not reproducible, for some reason). I've tried various permutations of xorg.conf, without success. Xorg -configure fails for now, though I might be able to get it to work. apt-get update/upgrade doesn't improve anything either. However, both Windows and the 12.04 Live CD still work beautifully, and I know that all my data is still there. Is there any way that I could somehow take the configuration from the Live CD and roll with it? I know that I could reinstall, but that sucks, frankly, especially given that there's no straightforward way of keeping the home (which, incidentally, is inaccessible from the Live CD). Thank you. Update: it seems that the fglrx drivers are still active, even after I've --purged them. From Xorg.0.log: [ 18.235] (WW) fglrx(0): *********************************************************** [ 18.235] (WW) fglrx(0): * DRI initialization failed * [ 18.235] (WW) fglrx(0): * kernel module (fglrx.ko) may be missing or incompatible * [ 18.235] (WW) fglrx(0): * 2D and 3D acceleration disabled * [ 18.235] (WW) fglrx(0): *********************************************************** [ 18.235] Fatal server error: [ 18.235] AddScreen/ScreenInit failed for driver 0 There's also a mention of the "fbdev" module. What is it? PARTIALLY SOLVED: I've undone the damage from the fglrx purge. I'm still mystified as to why uninstalling the packages didn't kill fglrx entirely, but I've now recovered the prompt. The solution to the DRI initialization error was to add radeon.modeset=0 to the GRUB boot options. So I'm back to being dropped to a prompt without any GUI. startx gives me a bunch of messages, though no obvious errors. I have little reason to suspect the video drivers, as they worked fine before today. There is no apparent error message in any of the log files. UPDATE: When I startx, I get an error, Plymounth command failed mountall: Disconnected from Plymouth This is all over the Internet, but I have not found anything that works for me yet. UPDATE 3: If I press ESC during boot, the splash screen (Plymouth!) disappears, and I no longer have any error from Plymouth. The last error message is: Stopping mount filesystems on boot I can then Ctrl+Alt+F1 to get to TTY1, but startx still does not work. Sadly, the Internet knows nothing about this error message, and neither do I. Help!

    Read the article

  • New Training and Support Center Coming Soon!

    - by Ruth
    The CRM On Demand Training and Support Center is getting a face lift. In May 2010 we will unveil the new and improved layout, look and feel, and even some new content. Some of you told us loud and clear that you wanted an easier way to find our training courses and other important information. Well, here you are: Immediately you see the look and feel has changed and things have moved around a bit. You may ask, "How can I find the training catalog? Service requests? Downloads?" There are a few ways to find what you're looking for. You may use the search box to find training, quick guides, downloads, best practices, FAQs and more. You may also click the tabs or links in the blue bar, like Browse Training, to browse other documents and information. Here is a brief outline of the tabs and links that will help as you navigate this new tool: The Support tab provides alerts and notifications specific to your application environment. The Get Started tab is organized by role and contains links to resources aimed at helping you get the most out of your first 30 days with CRM On Demand. The Learn More tab outlines information in key topic areas, like administration, integration, and reports. Go to this tab to get the resources you need to move beyond the basics. The Release Information tab contains information specific to the current and upcoming releases of CRM On Demand. Access this tab to learn about and prepare for upgrades to your CRM On Demand application. The Best Practices tab contains a compilation of knowledge gained by experts who work with CRM On Demand day in and day out. Access this knowledge to benefit from their vast experience. The Communities tab offers connections to others in the CRM On Demand community through forums, communities, blogs, and more. The Browse Training link opens the training catalog. Take a look at the instructor-led training, Webinars, quick guides, use cases, and tools available to you. The Browse Knowledge link takes you to our knowledge base where you can get answers to frequently asked questions. The Submit a Service Request link directs you to My Oracle Support where you can log a service request. The steps in that process have not changed. The Web Services Library provides simple APIs and a link to Oracle Sample Code where you can get samples that can help you build custom integrations. The Add-On Applications link provides access to our downloadable applications that let you extend the functionality of CRM On Demand. The Templates and Tools link provides access to resources that can help you design and build CRM On Demand to meet your company's specific needs. A lot has changed and I know it is a lot to take in. To help you out, we have a printable quick guide that you can use during this transition. As always, let us know what you think: [email protected].

    Read the article

  • Monogame/SharpDX - Shader parameters missing

    - by Layoric
    I am currently working on a simple game that I am building in Windows 8 using MonoGame (develop3d). I am using some shader code from a tutorial (made by Charles Humphrey) and having an issue populating a 'texture' parameter, as it appears to be missing. Edit I have also tried 'Texture2D' and using it with a register(t0), still no luck. I'm not well versed in writing shaders, so this might be caused by a more obvious problem. I have debugged through MonoGame's Content processor to see how this shader is being parsed; all the non-'texture' parameters are there and look to be loading correctly. Edit This seems to go back to the D3D compiler. Shader code below: #include "PPVertexShader.fxh" float2 lightScreenPosition; float4x4 matVP; float2 halfPixel; float SunSize; texture flare; sampler2D Scene: register(s0){ AddressU = Clamp; AddressV = Clamp; }; sampler Flare = sampler_state { Texture = (flare); AddressU = CLAMP; AddressV = CLAMP; }; float4 LightSourceMaskPS(float2 texCoord : TEXCOORD0 ) : COLOR0 { texCoord -= halfPixel; // Get the scene float4 col = 0; // Find the suns position in the world and map it to the screen space. float2 coord; float size = SunSize / 1; float2 center = lightScreenPosition; coord = .5 - (texCoord - center) / size * .5; col += (pow(tex2D(Flare,coord),2) * 1) * 2; return col * tex2D(Scene,texCoord); } technique LightSourceMask { pass p0 { VertexShader = compile vs_4_0 VertexShaderFunction(); PixelShader = compile ps_4_0 LightSourceMaskPS(); } } I've removed default values as they are currently not supported in MonoGame and also changed ps and vs to v4 instead of v2. Could this be causing the issue? As I debug through the 'DXConstantBufferData' constructor (from within the MonoGameContentProcessing project) I find that the 'flare' parameter does not exist. All others seem to be getting created fine. Any help would be appreciated. Update 1 I have discovered that the SharpDX D3D compiler seems to be ignoring this parameter (perhaps by design?). ConstantBufferDescription.VariableCount seems not to count the texture variable. Update 2 The SharpDX function 'GetConstantBuffer(int index)' returns the parameters (minus textures), which makes it impossible to set values for these variables within the shader. Anyone know if this is normal for DX11 / Shader Model 4.0? Or am I missing something else?
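    For context, as far as I understand Shader Model 4.0, textures are bound as shader resources rather than as constant buffer members, so reflection over constant buffers alone will not list them; that would be consistent with 'flare' missing from GetConstantBuffer. Below is a rough sketch (asset names, game class, and parameter values are assumptions, not taken from the question) of how the parameter would be fed from MonoGame game code once the content pipeline exposes it, with a guard that makes a stripped parameter visible at runtime:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class SunGame : Game   // hypothetical game class, for illustration only
        {
            Effect lightMask;
            Texture2D flareTexture;

            protected override void LoadContent()
            {
                // Asset names are assumptions; use whatever your content project defines.
                lightMask = Content.Load<Effect>("LightSourceMask");
                flareTexture = Content.Load<Texture2D>("flare");

                EffectParameter flareParam = lightMask.Parameters["flare"];
                if (flareParam == null)
                {
                    // If the compiler or content processor stripped the texture, it tends
                    // to surface here as a missing parameter rather than a clear error.
                    System.Diagnostics.Debug.WriteLine("'flare' was not exposed by the compiled effect.");
                }
                else
                {
                    flareParam.SetValue(flareTexture);
                    lightMask.Parameters["SunSize"].SetValue(1500f);
                    lightMask.Parameters["lightScreenPosition"].SetValue(new Vector2(640f, 100f));
                }
            }
        }

    The null check is cheap insurance: it separates "the game code never set the texture" from "the compiled effect never contained the parameter", which are easy to confuse when the screen just renders black.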

    Read the article

  • Integrating JavaFX Scene Builder in the IDEs

    - by Jerome Cambon
    I recently tried using Scene Builder from NetBeans, Eclipse and IntelliJ IDEA. As you may know, Scene Builder is a standalone tool that can be used independently of any IDE. But it can be very convenient to use it with your favorite IDE, for instance to start it by double-clicking on an FXML file, or to run the samples delivered with Scene Builder. I'm sharing here a few tweaks that I had to make for better integration. Scene Builder 1.1 Developer Preview should be installed before doing the tweaks. The steps below have been done on Windows 7. It should be very similar on both Mac OS and Linux. Please tell me if you find any issue on one of these two platforms. Netbeans 7.3 Netbeans 7.3 can be downloaded from here. Creating a New FXML project Among the JavaFX project types, NetBeans allows you to create a 'JavaFX FXML Application', which creates a JavaFX project based on an FXML description. The FXML file will be editable with Scene Builder. Starting Scene Builder from Netbeans If Scene Builder 1.1 is installed, NetBeans will discover it automatically. In case of issues, one can open the Options panel, Java section, JavaFX tab. The Scene Builder home should appear here. You can then either Open the FXML file with Scene Builder, or edit it with the NetBeans FXML editor: When 'Open' is selected, Scene Builder appears on top of the NetBeans window: When 'Edit' is selected, the FXML is opened in the NetBeans FXML editor, which supports syntax highlighting and completion: Using Scene Builder Samples Scene Builder provides NetBeans projects that can be opened and run directly: Eclipse 4.2.1 + e(fx)clipse 0.1.1 JavaFX integration in Eclipse is done with the e(fx)clipse plugin. A distribution bundle containing Eclipse and e(fx)clipse is provided here. Creating New FXML project All the JavaFX-related projects can be found in the 'Other' section: First create a new JavaFX project: Enter the project name (Test here). The JavaFX runtime will be found in the JRE. Then, create a 'New FXML Document': Enter the FXML file name (Sample here). You may also want to choose the FXML document root element (AnchorPane by default). Dynamic root is for advanced users who want to manage custom types. Starting Scene Builder from Eclipse Once created, you can then either Open the FXML file with Scene Builder, or Open it in the Eclipse FXML editor: Using Scene Builder Samples from Eclipse To use the Scene Builder samples, first create a new JavaFX Project (from the 'Other' section): Then, on the next panel, 'Link additional source': … and select the source directory of a Scene Builder example: HelloWorld here (the parent directory of the java package should be selected). Then, choose a 'Folder name' for your sample: You can now run the Scene Builder example by right-clicking the Main.java source file: IntelliJ IDEA 11.1.3 IntelliJ IDEA Community Edition can be downloaded from here. IntelliJ IDEA has no specific JavaFX integration. Creating New IntelliJ project from existing source Since IntelliJ has no JavaFX project knowledge, we are using the Scene Builder samples as a starting point. We are going to create a new Java project from the HelloWorld sample: Then, click twice on 'Next' (nothing to change), then 'Finish'. The 'HelloWorld' project is created. Starting Scene Builder from IntelliJ We need to tell the IDE that FXML files are opened with an external application. Then, the OS file association will be used. To do this, open the File->Settings panel. Then, select 'File Types' and 'Files opened in associated applications'.
    And add a new wildcard: '*.fxml'. Now, from the HelloWorld project, you can double-click on HelloWorld.fxml: the Scene Builder window appears on top of the IntelliJ window. Using Scene Builder Samples from IntelliJ We need to tell IntelliJ that the FXML files must be copied into the build directory. To do that, from the HelloWorld directory, open the 'idea' section, and edit the 'compiler.xml' file. We need to add an '*.fxml' entry: Then, you can run the sample from the HelloWorld project by right-clicking the Main class:

    Read the article

  • How to pad number with leading zero with C#

    - by Jalpesh P. Vadgama
    Recently I was working on a project where I needed to format a number so that it was padded with leading zeros. After a bit of R&D I found a clean way to apply this leading-zero format. I needed to pad numbers to a 5-digit format. The following table shows the format I need. 1-> 00001 20->00020 300->00300 4000->04000 50000->50000 So in the above example you can see that 1 becomes 00001, 20 becomes 00020, and so on. To display an integer value in this decimal format I applied the integer's ToString(string) method, passing "Dn" as the value of the format parameter, where n represents the minimum length of the string. So if we pass 5 it will pad up to 5 digits. Let's create a simple console application and see how it works. Following is the code for that. using System; namespace LeadingZero { class Program { static void Main(string[] args) { int a = 1; int b = 20; int c = 300; int d = 4000; int e = 50000; Console.WriteLine(string.Format("{0}------>{1}",a,a.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", b, b.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", c, c.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", d, d.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", e, e.ToString("D5"))); Console.ReadKey(); } } } As you can see in the above code, I have used the string.Format function to display the value of the integer, together with the integer value's ToString method. Now let's run the console application; following is the expected output. Here you can see the integer numbers are converted into exactly the output that we require. That's it. You can see it's very easy. We have written the code in a nice, clean way, without any extra code or loops. Hope you liked it. Stay tuned for more. Till then, happy programming.
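    For completeness, here is a small sketch of a few equivalent ways to get the same padded output; the string-interpolation form assumes C# 6 or later, which is newer than the compiler this article targets, while the others work on older versions as well.

        using System;

        class LeadingZeroAlternatives
        {
            static void Main()
            {
                int value = 20;

                // All four lines print "00020".
                Console.WriteLine(value.ToString("D5"));             // standard numeric format string
                Console.WriteLine(string.Format("{0:D5}", value));   // composite formatting
                Console.WriteLine($"{value:D5}");                    // string interpolation (C# 6+)
                Console.WriteLine(value.ToString().PadLeft(5, '0')); // plain string padding
            }
        }

    The "D5" format and the "{0:D5}" composite form are interchangeable, so you can pick whichever reads best where the value is being written out.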

    Read the article

  • Dual boot Ubuntu and Windows 7, I Can only boot ubuntu through recovery mode

    - by Alec
    I want to become a new user of Ubuntu, however this problem is preventing me. I have/had Windows 7 Professional on my computer. I recently looked into getting Linux. I discovered dual-booting and decided to give it a try. First I created a bootable flash drive with Ubuntu 12.10 64-bit. I then followed the instructions on: https://help.ubuntu.com/community/WindowsDualBoot After I finished going through the setup, my computer rebooted. After the reboot I was able to select Ubuntu, advanced options for Ubuntu, 2 memory tests, and Windows 7 (loader). So I chose Windows (honestly, I was more concerned that I still had everything on Windows at this point). I then rebooted again and selected Ubuntu. When I selected Ubuntu, the background screen of Grub (the crimson/burgundy color) stayed for a few seconds, then the screen went black: video here http://www.youtube.com/watch?v=6kKcG4sT7Lg&feature=plcp I tried again with the same results. So I redid the Ubuntu install differently using http://www.liberiangeek.net/2012/10/dual-booting-windows-7-and-ubuntu-12-10-quantal-quetzal/. After rebooting, the same thing happened. After that I was stumped, so I figured it couldn't hurt to experiment. After all, I backed up my Windows 7 stuff, and I have the software disk. I tried booting in recovery mode under "advanced options for Ubuntu" and sure enough, after selecting continue to normal reboot, it worked. So I updated and everything, but when I rebooted it still wouldn't boot under Ubuntu. It would always boot after recovery mode. So I tried installing the 32-bit Ubuntu 12.10. The same problem kept happening. I can still get to Ubuntu through recovery mode. So I went online and tried using the terminal (in the Ubuntu that I booted through recovery mode). While I was using it I discovered that "Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected" kept showing up. Also I noticed a notification in the top right corner that looked like a do-not-enter sign. It said "an error occured, please run package manager from the right-click menu or apt-get in a terminal to see what is wrong. the error message was: 'ror in sitecustomize;set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected traceback (most recent called last): File "/usr/bin/lsb_release EOFError: EOF read where not expected 39;0' this usually means that your installed packages have unmet dependencies" Naturally I assumed this was what was causing my boot problems. I downloaded Synaptic and updated everything, and the error went away, but my boot problem was still a problem. So I went online and found some things that have worked for others, like this: Try to do this (in your terminal): sudo nano /etc/default/grub Look for: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" Change it to: GRUB_CMDLINE_LINUX_DEFAULT="quiet" And update Grub: sudo update-grub This should fix stuff. I did this and I still have the problem. Sorry for the excessive explanation, please help.

    Read the article

< Previous Page | 483 484 485 486 487 488 489 490 491 492 493 494  | Next Page >