Search Results

Search found 23236 results on 930 pages for 'content strategy'.

  • Selecting content from more than one table [PHP/MySQL]

    - by SAFSOF
    Hi there. I want to get the latest threads/messages. I wrote my code and then made a function that calls it to show the last messages in a specific board, and it works great. Now I want to get the latest messages from two or more boards in the same function. This is the part that chooses the board: AND b.id_board = t.id_board' . (empty($vars) ? '' : ' AND b.id_board = ' . $vars . ''). ' I tried to use functionname(1.2.3); but it says there is no board with id 1.2.3, and I tried ("1,2,3") with the same result. I hope I have made it clear; I appreciate your help.
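
    A common way to match several boards at once is an SQL IN () list built from a PHP array. A minimal sketch, assuming the caller passes board ids as an array (the helper name is made up):

        <?php
        // Hypothetical helper: accepts a single board id or an array of ids
        // and returns the extra WHERE condition used in the query above.
        function boardFilter($boards)
        {
            if (empty($boards)) {
                return '';                                   // no extra condition
            }
            $ids = array_map('intval', (array) $boards);     // force plain integers
            return ' AND b.id_board IN (' . implode(',', $ids) . ')';
        }

        // Usage: latest messages from boards 1, 2 and 3.
        $condition = 'AND b.id_board = t.id_board' . boardFilter(array(1, 2, 3));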

    Read the article

  • How Can I Create A Featured Content Area?

    - by ThatMacLad
    I'm working on a blog and I'd love to create a homepage with a featured post image area. I'd like it to switch between 2-3 of the latest posts' images. I was wondering how I would go about doing this in a way that also gives me a form to update it, rather than constantly editing my code.
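
    One lightweight shape for this is an admin form that flags which posts are featured, a template that prints those posts' images into one container, and a few lines of jQuery to rotate them. A rough sketch of the rotation piece only, assuming the template outputs a div with class "featured" containing img.slide elements (both names invented here):

        // Cycle the featured images every 5 seconds.
        $(function () {
            var $slides = $('.featured img.slide');
            var current = 0;

            if ($slides.length < 2) return;       // nothing to rotate

            $slides.hide().eq(0).show();          // start with the first image visible

            setInterval(function () {
                $slides.eq(current).fadeOut(400);
                current = (current + 1) % $slides.length;
                $slides.eq(current).fadeIn(400);
            }, 5000);
        });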

    Read the article

  • mCustomScrollbar: appending new content not working

    - by Dariel Pratama
    I have this script on my website:

        $(document).ready(function(){
            var comment = $('textarea[name=comment_text]').val();
            var vid = $('input[name=video_id]').val();
            $('#add_comment').on('click', function(){
                $.post('<?php echo site_url('comment/addcomments'); ?>', $('#comment-form').serialize(), function(data){
                    data = JSON.parse(data);
                    if(data.userdata){
                        var date = new Date(data.userdata.comment_create_time * 1000);
                        var picture = data.userdata.user_image.length > 0 ? 'fileupload/'+data.userdata.user_image : 'images/no_pic.gif';
                        var newComment = '<div class="row">\
                            <div class="col-md-4">\
                                <img src="<?php echo base_url(); ?>'+picture+'" class="profile-pic margin-top-15" />\
                            </div>\
                            <div class="col-md-8 no-pade">\
                                <p id="comment-user" class="margin-top-15">'+data.userdata.user_firstname+' '+data.userdata.user_lastname+'</p>\
                                <p id="comment-time">'+date.getTime()+'</p>\
                            </div>\
                            <div class="clearfix"></div>\
                            <div class="col-md-12 margin-top-15" id="comment-text">'+data.userdata.comment_text +'</div>\
                            <div class="col-md-12">\
                                <div class="hr-grey margin-top-15"></div>\
                            </div>\
                        </div>';
                        $('#comment-scroll').append($(newComment));
                        $('#comment').modal('hide');
                    }
                });
            });
        });

    What I expect: when a comment is added to the DB and the PHP page returns a JSON response, the new comment should be appended at the end of $('#comment-scroll'). #comment-scroll also has a custom scroll bar from mCustomScrollbar. The script also hides the modal dialog once the comment is saved, and that part works fine, which means data.userdata is not empty; so why isn't the append() working?
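
    mCustomScrollbar moves the element's children into its own inner container when it initializes, so content appended straight to #comment-scroll afterwards can land outside the scrollable area and never show up. A hedged sketch of the usual fix, using the plugin's documented update/scrollTo calls (worth verifying against the plugin version in use):

        // After building newComment exactly as above:
        $('#comment-scroll').append(newComment);

        // Tell the plugin its content size changed, then scroll to the
        // bottom so the freshly added comment is visible.
        $('#comment-scroll').mCustomScrollbar('update');
        $('#comment-scroll').mCustomScrollbar('scrollTo', 'bottom');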

    Read the article

  • Input labels for dynamically loaded HTML content

    - by treznik
    I have a form that is dynamically loaded through AJAX and dumped with innerHTML inside an area of a website. The problem I'm encountering is that, when clicked, the labels don't seem to focus the inputs they're tied to. I'm sure the markup is correct, because if I open the form's HTML separately the labels work. Is there a way to somehow re-initialize this behavior natively? I'm not looking to simulate it with JS. Thanks!

    Read the article

  • get html content of next element with a defined classname

    - by Carasuman
    I'm having problems with jQuery. If I have this code to fill a tooltip div: <div id="tooltip"></div> <div class="tooltipMessage">Tooltip Message</div> <script type="text/javascript"> $("#id").html($("#tooltip").next(".tooltipMessage").html()); </script> it works; the code takes the next element with class name "tooltipMessage" and fills the element with id "tooltip". But if I have this code: <div id="tooltip"></div> <p>other element</p> <div class="tooltipMessage">Tooltip Message</div> it returns "undefined". How can I take the html from the next element with class name "tooltipMessage" when another element sits in between? Thanks for the help!
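
    .next() only looks at the immediately following sibling, so anything sitting in between makes it return an empty set; .nextAll() (or .siblings()) scans every following sibling instead. A small sketch:

        // Take the html of the first following sibling with the class,
        // no matter how many other elements sit in between.
        var tip = $('#tooltip').nextAll('.tooltipMessage').first().html();
        $('#id').html(tip);

        // Alternative: search all siblings, before and after the element.
        // var tip = $('#tooltip').siblings('.tooltipMessage').first().html();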

    Read the article

  • Remove the content of a directory and its subdirectories hierarchically without destroying the directory structure

    - by user3713876
    In a shell script, I want to clear only the text files and log files in the following structure, without removing the directory or its subdirectories:

        bar/
        |---file1.txt
        |---file2.txt
        |---subdir1/
        |   |---file1.log
        |   |---file2.log
        |---subdir2/
            |---image1.log
            |---image2.log

    I am using rm -rf /bar/*, so I am getting just:

        bar/

    but I want the output to look like this:

        bar/
        |---subdir1/
        |---subdir2/

    I want to remove only the text, log, or csv files, without removing the directory or the subdirectories.
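
    find can delete by file type and name pattern while leaving every directory untouched. A sketch, assuming GNU find and the bar/ tree shown above:

        #!/bin/sh
        # Delete only .txt, .log and .csv files under bar/, keep all directories.
        find bar/ -type f \( -name '*.txt' -o -name '*.log' -o -name '*.csv' \) -delete

        # Portable variant if your find has no -delete:
        # find bar/ -type f \( -name '*.txt' -o -name '*.log' -o -name '*.csv' \) -exec rm -f {} +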

    Read the article

  • Simple: replace div with ajax content (jquery)

    - by user469110
    I followed this thread. I now have: <a href="#" onclick="$('#gc').load('test');">reload</a>... </span> <div id="gc"> empty </div> This is what I am getting: Uncaught exception: TypeError: Cannot convert '$('#gc')' to object Error thrown at line 1, column 0 in <anonymous function>(event): $('#gc').load('test'); What is that? I thought I would be able to select a div and replace the contents with load()?
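
    That TypeError usually means $ is not jQuery at the moment the inline onclick runs (jQuery loaded later in the page, or another library owning $). A sketch that drops the inline handler and binds once jQuery is ready, assuming a made-up "reload" class on the link:

        // Bind the click after the DOM and jQuery are ready; no inline onclick needed.
        jQuery(function ($) {
            $('a.reload').on('click', function (e) {
                e.preventDefault();          // keep the "#" href from jumping the page
                $('#gc').load('test');       // replace #gc's contents with the response
            });
        });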

    Read the article

  • How is my EditText content being saved?

    - by hwexler2
    I created a simple app that has nothing except an EditText element. When I run the app, I type text into the element and then press Ctrl-F11 to change the emulator's orientation. I've added logging information to make sure that the activity gets destroyed and re-created when I change orientation. I haven't added any code to save the text in the EditText element and yet, after the change of orientation, the text that I typed stays in the EditText element. What mechanism in Android is saving and then restoring the element's text (is it savedInstanceState) and how can I see for myself the details of this saving operation?
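
    That is the view-state half of saved instance state: the default Activity.onSaveInstanceState() walks the view hierarchy and stores the state of every view that has an android:id, EditText text included, and restores it after rotation. One way to watch it happen, sketched in Java with a hypothetical log tag:

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class MainActivity extends Activity {

            private static final String TAG = "StateDemo";   // hypothetical tag

            @Override
            protected void onSaveInstanceState(Bundle outState) {
                super.onSaveInstanceState(outState);          // this super call saves the view hierarchy
                Log.d(TAG, "saved state keys: " + outState.keySet());
            }

            @Override
            protected void onRestoreInstanceState(Bundle savedInstanceState) {
                super.onRestoreInstanceState(savedInstanceState);
                Log.d(TAG, "restored: " + savedInstanceState);
            }
        }

    Removing the android:id from the EditText (or calling setSaveEnabled(false) on it) makes the text vanish on rotation, which confirms the mechanism.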

    Read the article

  • Error when selecting content from ADDTIME(CURTIME(), '14400 hour') format

    - by Blahwhore
    So apparently I've stumbled upon a coding error when trying to select by time from my database. The code is SELECT * FROM `videos` WHERE `added_time` > AddTime( CurTime(), '14400 hour' ). I'm trying to select all the videos posted 10 days (14400 hours) ago using the "added_time" column, because this worked in my previous code, but here it won't work. Shown below is a link to an image of my database structure for videos. http://i.imm.io/NURT.png Edit: Previously I had this problem for retrieving and deleting bulletins posted 10 days ago, and that code worked; however, it apparently won't work when trying to retrieve the videos :/ I don't know why, they're using the same format. See: http://i.imm.io/NUSW.png
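
    For what it's worth, CURTIME() returns only the time of day, so arithmetic on it can't reach back across days. If added_time is a DATETIME/TIMESTAMP column (an assumption based on the screenshot link), a date-interval comparison is the usual shape; a sketch, assuming the goal is "videos from the last 10 days":

        -- Videos added within the last 10 days.
        SELECT *
        FROM   `videos`
        WHERE  `added_time` > NOW() - INTERVAL 10 DAY;

        -- If added_time is stored as a Unix timestamp instead:
        -- WHERE `added_time` > UNIX_TIMESTAMP() - 10 * 24 * 60 * 60;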

    Read the article

  • Growing Into Enterprise Architecture

    - by pat.shepherd
    I am writing this post as I am in an Enterprise Architecture class, specifically on the Oracle Enterprise Architecture Framework (OEAF). I have long believed that SOA's key strength is that it is the first IT approach that blends or unifies business and technology. That is a common view and is certainly valid, but it is not completely true (or at least not completely accurate). As my personal view of EA grows, I realize more than ever that doing EA is FAR MORE than creating a reference architecture, creating a physical architecture or picking a technology to standardize on. Those are parts of the puzzle but not the whole puzzle by any stretch. I am now a firm believer that the various EA frameworks out there provide the rigor and structure required to allow the bridging of business strategy / vision to IT strategy / vision. The flow goes something like this: Business Strategy –> Business / Application / Information / Technology Architecture –> SOA Reference Architecture –> SOA Functional Architecture. Governance is imbued throughout to help map, measure and verify the business-to-IT coherence. With those in place, then (and only then) can SOA fulfill its potential to be more than an integration strategy, more than a reuse strategy, and become a foundation for tying the results of IT to business vision. Fortunately, EA is an ongoing process, and it is never too late to get started with an understanding of frameworks such as TOGAF, FEA, or OEAF. EA is also never-ending: it always needs to be applied, and even once a full-blown Enterprise Architecture is established it needs to be constantly evolved. For those who are getting deeper into EA as a discipline, there is plenty of runway to grow as your company/customer begins to look more seriously at EA. I will close with a pointer to a great book I have recently read on this subject: Enterprise Architecture as Strategy (http://www.amazon.com/Enterprise-Architecture-Strategy-Foundation-Execution/dp/1591398398/ref=sr_1_1?ie=UTF8&s=books&qid=1268842865&sr=1-1)

    Read the article

  • Say What? Podcasting As Part of Your Content Marketing

    - by Mike Stiles
    What do you usually do in your car on the way to work?  Sing along to radio? Stream Pandora or iHeartRadio? Talk on the phone? Sit in total silence? Whatever it is you do, you could be using that time to make yourself an expert in any range of topics…using podcasts. We invite you to follow or subscribe to the daily Oracle Social Spotlight podcast, a quick roundup of the day’s top stories around social marketing and the social networks. After podcasts arrived in 2004, growth was steady but slow. The concept was strong: anyone with a passion for any subject could make a show for anyone who cared to listen. Enter the smartphone, iTunes, new podcasting platforms, and social, and podcasting became easier than ever and made more sense for both podcasters and listeners. Stats show 1 in 5 smartphone owners are podcast consumers and 29% of Americans have listened to a podcast. The potential audience is also larger than ever. “Baked in” podcast apps on over 200 million devices expose users to volumes of audio content with just a tap. 97 million Americans are driving to work every day by themselves. And 38% of Americans listen to audio on a digital device each week, a number that’s projected to double by 2015. Does that mean your brand should be podcasting? That’s part of a larger discussion about your overall content strategy, provided you have one. But if you do and podcasting is a component of it, here are some things to keep in mind: Don’t podcast just to do it. Podcast because you thought of a show customers and prospects will like that they can’t get anywhere else. Sound quality matters. Good microphones are not expensive. Bad sound is annoying, makes your brand feel cheap, and will turn today’s sophisticated ears off. The host matters. Many think they belong on the radio. Few actually do. Your brand’s host should be comfortable & likeable. A top advantage of a podcast is people can bond with a real person. It’s a trust opportunity, so don’t take it lightly. The content matters. “All killer, no filler” means don’t allow babbling just to fill enough time for an episode. Value the listeners’ time, because that time is hard to get. Put time, effort and creativity into it. Sure you’re a business, but you’re competing with content from professional media and showbiz producers. If you can include music, sound effects, and things that amuse the ears, do it. If you start, be consistent. The #1 flaw in podcasting is when listeners can’t count on another episode or don’t know when it’s coming. Don’t skip doing shows just because you can. Get committed. Get your cover art right. Podcasting is about audio, but people shop for podcasts by glancing through graphics. Yours has to be professional, cool, and informative to get listeners interested. Cross-promote your podcast on all your channels. The competition for listeners is fierce, so if you have existing audiences you can leverage to launch your show, use them. Optimize it for mobile. Assume that’s where most listening will take place. If you’re using one of the podcast platform apps, you should be in good shape. Frankly, the percentage of brands that are podcasting is quite low, and that’s okay. Once you move beyond blogging and start connecting with real voices, poor execution can do damage. But more (32%) marketers want to learn how to use podcasting, and more (23%) were increasing their podcasting throughout this year. 
Bottom line, you want to share your brand's message and stories wherever your audience might be and in whatever way they prefer to take in content. Many prefer to do that while driving or working out, using the eyes- and hands-free medium of audio. @mikestiles Photo: stock.xchng

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;

            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

                if (AcceptEncoding.Contains("gzip"))
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
            }

            // Allow proxy servers to cache encoded and unencoded versions separately
            Response.AppendHeader("Vary", "Content-Encoding");
        }

    The first method checks whether the client sending the request includes an Accept-Encoding header for either gzip or deflate, and if it does it returns true. The second function uses IsGZipSupported() to decide whether it should encode content and uses a Response Filter to do its job. Basically, response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

        public ActionResult List(string keyword=null, int category=0)
        {
            WebUtils.GZipEncodePage();
            …
        }

    to encode my content. And that works just fine.

    The proper way: Create an ActionFilterAttribute

    However, in MVC this sort of thing is typically better handled by an ActionFilter, which can be applied with an attribute. So to be all prim and proper I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and looks like this:

        /// <summary>
        /// Attribute that can be added to controller methods to force content
        /// to be GZip encoded if the client supports it
        /// </summary>
        public class CompressContentAttribute : ActionFilterAttribute
        {
            /// <summary>
            /// Override to compress the content that is generated by
            /// an action method.
            /// </summary>
            /// <param name="filterContext"></param>
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                GZipEncodePage();
            }

            /// <summary>
            /// Determines if GZip is supported
            /// </summary>
            /// <returns></returns>
            public static bool IsGZipSupported()
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (!string.IsNullOrEmpty(AcceptEncoding) &&
                    (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                    return true;
                return false;
            }

            /// <summary>
            /// Sets up the current page or handler to use GZip through a Response.Filter
            /// IMPORTANT: You have to call this method before any output is generated!
            /// </summary>
            public static void GZipEncodePage()
            {
                HttpResponse Response = HttpContext.Current.Response;

                if (IsGZipSupported())
                {
                    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

                    if (AcceptEncoding.Contains("gzip"))
                    {
                        Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                        Response.Headers.Remove("Content-Encoding");
                        Response.AppendHeader("Content-Encoding", "gzip");
                    }
                    else
                    {
                        Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                        Response.Headers.Remove("Content-Encoding");
                        Response.AppendHeader("Content-Encoding", "deflate");
                    }
                }

                // Allow proxy servers to cache encoded and unencoded versions separately
                Response.AppendHeader("Vary", "Content-Encoding");
            }
        }

    It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to Controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting(), which fires before the Controller action is fired. With the CompressContentAttribute created, it can now be applied either to the controller as a whole:

        [CompressContent]
        public class ClassifiedsController : ClassifiedsBaseController
        {
            …
        }

    or to one of the Action methods:

        [CompressContent]
        public ActionResult List(string keyword=null, int category=0)
        {
            …
        }

    The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact, you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than in old WebForms apps, since controller methods tend to be more focused.

    Compression Caveats

    Http compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it, especially if your content might get transformed or redirected inside of ASP.NET. A good example is when an error occurs while a compression filter is applied: ASP.NET errors don't clear the filter, but they do clear the Response headers, which results in some nasty garbage because the compressed content no longer matches the headers. Another issue is caching, which has to account for all the possible ways, compressed and uncompressed, that the content can be served. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output GZip encoded.
    None of these are show stoppers, but you have to be aware of the issues.

    Related Posts: GZip Compression with ASP.NET Content; ASP.NET GZip Encoding Caveats.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.

    Read the article

  • What strategy to use when starting in a new project with no documentation?

    - by Amir Rezaei
    What is the best way to go when there is no documentation? For example, how do you learn the business rules? I have taken the following steps: since we are using an ORM tool, I have printed a copy of the database schema where I can see the relations between objects, and I have made a list of short names/table names that I will get explained. The project is a client/server enterprise application using the MVVM pattern.

    Read the article

  • Google+1 button strategy - Combined +1s or separate +1s?

    - by nctrnl
    I have included the Google +1 button on my blog. Each post outputs a +1 button at the bottom. Depending on whether you are viewing the actual post or just the main page, the +1 button will "+1" either the post address or the blog website address. This made me wonder: should the +1 button be configured to +1 the blog section (www.example.org/blog), +1 the main website address (www.example.org), or +1 individual posts?

    Read the article

  • Workaround: building an FBX in XNA raises OutOfMemoryException

    - by Vitus
    If you try to add large FBX 3D model to the XNA project, and build it, you can get an OutOfMemoryException build error like following: Error    1    Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.    at System.Collections.Generic.List`1.set_Capacity(Int32 value)    at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)    at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)    at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)    at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)    //additional calls here …   My desktop PC have 8Gb RAM, and Visual Studio’s process devenv.exe use under 2Gb of it while build process (about 3.5-4Gb of RAM is always free). It’s obvious, that VS can’t address more than 2Gb of RAM, and when that limit is over, build process is fail. OS on my PC is Win x64,  so I “charge” devenv.exe by using editbin.exe utility – in the VS Command prompt I run following: editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that FBX file successfully built! Of course, you must put proper path to devenv.exe, depend on your installation path. If you are on Win x86, you need to do additional action – more info here.   P.S.: although now you can build a bigger files, than usual, keep in mind, that XNA have some restrictions on vertex buffer size etc., depend on your current XNA project profile (Reach or HiDef). And if your model’s vertexbuffer size more than 64Mb (with Reach profile), that model can’t be built and raise an error.

    Read the article

  • Best strategy (tried and tested) for using Box2D in a real-time multiplayer game?

    - by Simon Grey
    I am currently tackling real-time multiplayer physics updates for a game engine I am writing. My question is how best to use Box2D for networked physics. If I run the simulation on the server, should I send position, velocity, etc. to every client on every tick? Should I send it every few ticks? Is there another way that I am missing? How has this problem been solved with Box2D before? Any ideas would be greatly appreciated!
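
    A pattern that has worked for Box2D games is a server-authoritative simulation that broadcasts compact state snapshots at a lower rate than the physics tick, with clients interpolating between the last two snapshots they received. A rough C++ sketch of the snapshot side only; BodyState, BroadcastSnapshot and SendToAllClients are invented names, the transport call is a placeholder, and the every-3rd-tick rate is just an assumption:

        #include <box2d/box2d.h>   // header path varies with the Box2D version
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Minimal per-body snapshot; a real game would quantize and compress this.
        struct BodyState {
            std::uint32_t id;
            b2Vec2        position;
            b2Vec2        velocity;
            float         angle;
        };

        // Provided elsewhere by your transport layer (placeholder).
        void SendToAllClients(const std::vector<BodyState>& snapshot);

        // Called every server physics tick; broadcasts only every 3rd tick
        // (roughly 20 Hz when the simulation runs at 60 Hz).
        void BroadcastSnapshot(const std::vector<b2Body*>& tracked, std::uint64_t tick)
        {
            if (tick % 3 != 0)
                return;

            std::vector<BodyState> snapshot;
            snapshot.reserve(tracked.size());

            for (std::size_t i = 0; i < tracked.size(); ++i) {
                b2Body* body = tracked[i];
                BodyState s;
                s.id       = static_cast<std::uint32_t>(i);
                s.position = body->GetPosition();
                s.velocity = body->GetLinearVelocity();
                s.angle    = body->GetAngle();
                snapshot.push_back(s);
            }

            SendToAllClients(snapshot);   // clients interpolate between the last two snapshots
        }

    Sending velocities along with positions lets clients extrapolate between updates, which keeps bandwidth down without the motion looking choppy.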

    Read the article

  • Help parsing long (3.5mil lines) text file, line by line and storing data, need a strategy

    - by Jarrod
    This is a question about solving a particular problem I am struggling with: I am parsing a long list of text data, line by line, for a business app in PHP (a cron script on the CLI). The file follows this format:

        HD: Some text here {text here too}
        DC: A description here
        DC: the description continues here
        DC: and it ends here.
        DT: 2012-08-01
        HD: Next header here {supplemental text}
        ...

    and this repeats over and over for a few hundred megs. I have to read each line, parse out the HD: line and grab the text on that line. I then compare this text against data stored in a database. When a match is found, I want to record the DC: lines that follow the matched HD:. Pseudo code:

        while (the_file_pointer_isnt_end_of_file) {
            line    = getCurrentLineFromFile
            title   = parseTitleFrom(line)
            matched = searchForMatchInDB(line)
            if (matched) {
                recordTheDCLines // <- Best way to do this?
            }
        }

    My problem is that, because I am reading line by line, what is the best way to trigger the script to start saving DC lines and then, when they are finished, save them to the database? I have a vague idea but have yet to properly implement it. I would love to hear the community's ideas/suggestions! Thank you.
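
    One workable shape is a small state machine: read with fgets(), decide on each HD: line whether to start collecting, buffer the DC: lines that follow, and flush the buffer when the next HD: (or end of file) arrives. A sketch with hypothetical helper names (matchesDatabase, saveDescription):

        <?php
        $fh = fopen('data.txt', 'r');     // adjust the path for the real feed file
        $collecting = false;
        $dcLines    = array();

        while (($line = fgets($fh)) !== false) {
            if (strncmp($line, 'HD:', 3) === 0) {
                if ($collecting && $dcLines) {
                    saveDescription($dcLines);          // hypothetical: write buffered DC: lines
                    $dcLines = array();
                }
                $title      = trim(substr($line, 3));
                $collecting = matchesDatabase($title);  // hypothetical: compare title against DB
            } elseif ($collecting && strncmp($line, 'DC:', 3) === 0) {
                $dcLines[] = trim(substr($line, 3));
            }
        }
        if ($collecting && $dcLines) {
            saveDescription($dcLines);                  // flush the final record at EOF
        }
        fclose($fh);

    Reading with fgets() keeps memory flat regardless of file size, and batching the saveDescription() inserts (or wrapping them in a transaction) keeps the database overhead down.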

    Read the article

  • Strategy to prevent players from seeing through walls in an online FPS?

    - by geneotech
    Why do we still moan about wallhackers in multiplayer first-person shooters? Isn't it possible to perform occlusion culling for all players server-side? For example, send player xyz's information to a client only when that player is visible in the client's frustum and not occluded by any object. Even if the collision geometry is very simplified, most of the time the cheater won't receive tactical information. Why not do this?

    Read the article

  • Search Engine Optimization Services - Offering Cost-Effective Marketing Strategy!

    It's the increasing popularity of the Internet that is propelling more and more business owners to promote their websites in order to draw customers from around the world. These days, the Internet is considered the most effective marketing and product promotion platform, one that can take a business to new heights. If you are all set to get some international clients, then it's time to opt for SEO services.

    Read the article

  • Best strategy for supporting multiple server communication from iPhone/android app?

    - by tipycalFlow
    I'm making an app that will be used in multiple hospitals in the US. As per HIPAA compliance requirements, every hospital will have its own server that ensures patient data security, etc. The task is that the app should communicate with a particular server based on the login info. An additional requirement is that new hospitals (servers) are likely to be added along the way, even after the app is available on the market. So basically, according to the login credentials, the app should communicate with the server of the hospital assigned to that person. One pretty crude way is to set up our own server which links the hospitals to the login info and accordingly provides a base URL for data exchange. Is there a more efficient way to handle this?
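
    The "crude" option described is actually the common pattern: a small directory service maps the login (or a hospital code) to that hospital's base URL, the app caches whatever it gets back, and adding a hospital later is just a new directory entry. A sketch of the client side in Java, with an entirely made-up response shape and field names:

        // Example directory response (shape and field names are assumptions):
        //   { "hospitalId": "mercy-west", "baseUrl": "https://mercywest.example.org/api/" }
        import org.json.JSONObject;   // bundled with Android

        public class HospitalDirectory {

            /** Pull the per-hospital base URL out of the directory lookup response. */
            public static String resolveBaseUrl(String directoryJson) throws Exception {
                JSONObject obj = new JSONObject(directoryJson);
                return obj.getString("baseUrl");
            }

            /** Every later request is built against the resolved base URL. */
            public static String buildUrl(String baseUrl, String path) {
                return baseUrl.endsWith("/") ? baseUrl + path : baseUrl + "/" + path;
            }
        }

    The iPhone client can hit the same directory endpoint; the key design point is that the app never hard-codes hospital URLs, so onboarding a new hospital only requires a new directory entry.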

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout which provides details about how you influence other people and who influenced you. We'll be fetching data from a few social networking sites (i.e. LinkedIn, Facebook, Twitter, etc.) to analyze how users interact with one another. For that we need to parse the data, store it in a DB and analyze it so that the strength of the relationship between two users can be determined. We'll be accessing the data offline as well to provide accurate results. If we consider Facebook activity, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is the parsing of the data and its storage and retrieval with the least overhead. For that I've summarized a few points below: Should we switch over to a document-oriented database like MongoDB or RavenDB, for the whole app or part of it, even though none of the team members have experience with them? Should we use SQL Server Analysis Services? Is there any other library than Json.NET for parsing data? Is it advisable to use any C# library over FQL + GET requests? I've tried to provide as much info as possible. Please share your views.
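
    On the parsing sub-question, Json.NET by itself usually covers Graph API payloads; a small sketch of pulling stored feed items out of a saved response (the "data" array is the Graph API envelope, while the exact fields present depend on the permissions granted):

        using Newtonsoft.Json.Linq;

        class FeedParser
        {
            static void Main()
            {
                // A previously fetched Graph API response, stored for offline analysis.
                string json = System.IO.File.ReadAllText("feed.json");
                JObject feed = JObject.Parse(json);

                foreach (JToken post in feed["data"])
                {
                    string id      = (string)post["id"];
                    string message = (string)post["message"] ?? "";   // not every post has one
                    // persist id/message (and likes/comments counts) here
                    System.Console.WriteLine(id + ": " + message);
                }
            }
        }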

    Read the article

  • Is there a better strategy than relying on the compiler to catch errors?

    - by koan
    I've been programming in C and C++ for some time, although I would say I'm far from being an expert. For some time, I've been using various strategies to develop my code, such as unit tests, test-driven design, code reviews and so on. When I wrote my first programs in BASIC, I typed in long blocks before finding they would not run, and they were a nightmare to debug. So I learned to write a small bit and then test it. These days, I often find myself repeatedly writing a small bit of code and then using the compiler to find all the mistakes. That's OK if it picks up a typo, but when you start adjusting parameter types etc. just to make it compile, you can screw up the design. It also seems that the compiler is creeping into the design process when it should only be used for checking syntax. There's a danger here of over-reliance on the compiler to make my programs better. Are there better strategies than this? I vaguely remember an article from some time ago about a company developing a type of C compiler where an extra header file also specified the prototypes. The idea was that inconsistencies in the API definition would be easier to catch if you had to define it twice in different ways.

    Read the article

  • Branching strategy for parallel development that won't be in the same release?

    - by Telastyn
    My team is working on a product, which for business reasons needs to be released on a regular schedule. An issue has arisen where we want to do development in parallel for the upcoming release, as well as the 'next' release. This is to become standard practice, so it's not as straightforward as cutting a feature branch for the new work. We'll continually have 2+ teams working on different releases of the same product. Is there an SCM best practice for this sort of arrangement?
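
    One widely used arrangement is a long-lived branch per upcoming release cut from a shared mainline, with the earlier release merged forward into the later ones on a schedule so fixes are never lost. A sketch in git terms only as an illustration (the question is tool-agnostic, and the branch names here are placeholders):

        # Cut one long-lived branch per planned release from the shared mainline.
        git checkout -b release/next master
        git checkout -b release/after-next master

        # Team A commits on release/next, Team B on release/after-next.

        # Merge the earlier release forward regularly so the later release
        # never misses a fix that shipped before it.
        git checkout release/after-next
        git merge release/next

        # When release/next ships, fold it back into the mainline as well.
        git checkout master
        git merge release/next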

    Read the article
