Search Results

Search found 12919 results on 517 pages for 'tool pack'.


  • Turning off hibernate logging console output

    - by Jared
    I'm using Hibernate 3 and want to stop it from dumping all the startup messages to the console. I tried commenting out the stdout lines in log4j.properties, but no luck. I've pasted my log4j.properties below. I'm using Eclipse with the standard project structure and have a copy of log4j.properties in both the root of the project folder and the bin folder.

        ### direct log messages to stdout ###
        #log4j.appender.stdout=org.apache.log4j.ConsoleAppender
        #log4j.appender.stdout.Target=System.out
        #log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
        #log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n

        ### direct messages to file hibernate.log ###
        log4j.appender.file=org.apache.log4j.FileAppender
        log4j.appender.file.File=hibernate.log
        log4j.appender.file.layout=org.apache.log4j.PatternLayout
        log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n

        ### set log levels - for more verbose logging change 'info' to 'debug' ###
        log4j.rootLogger=warn, stdout
        #log4j.logger.org.hibernate=info
        log4j.logger.org.hibernate=debug

        ### log HQL query parser activity
        #log4j.logger.org.hibernate.hql.ast.AST=debug

        ### log just the SQL
        #log4j.logger.org.hibernate.SQL=debug

        ### log JDBC bind parameters ###
        log4j.logger.org.hibernate.type=info
        #log4j.logger.org.hibernate.type=debug

        ### log schema export/update ###
        log4j.logger.org.hibernate.tool.hbm2ddl=debug

        ### log HQL parse trees
        #log4j.logger.org.hibernate.hql=debug

        ### log cache activity ###
        #log4j.logger.org.hibernate.cache=debug

        ### log transaction activity
        #log4j.logger.org.hibernate.transaction=debug

        ### log JDBC resource acquisition
        #log4j.logger.org.hibernate.jdbc=debug

        ### enable the following line if you want to track down connection ###
        ### leakages when using DriverManagerConnectionProvider ###
        #log4j.logger.org.hibernate.connection.DriverManagerConnectionProvider=trace
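
    As an aside, here is a minimal, hedged sketch (assuming log4j 1.x is the logging backend, as the properties file above suggests) of raising Hibernate's log threshold programmatically. This sidesteps the question of which copy of log4j.properties is actually on the classpath; the class and method names here are illustrative only, not from the post.

        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;

        public class HibernateLogSilencer {
            // Call once at startup, before the SessionFactory is built.
            public static void quietHibernate() {
                // Raise the whole org.hibernate hierarchy to WARN so INFO/DEBUG
                // startup chatter is suppressed regardless of the properties file.
                Logger.getLogger("org.hibernate").setLevel(Level.WARN);
                // hbm2ddl (schema export/update) is set to debug explicitly in the
                // properties file above, so it needs its own explicit level.
                Logger.getLogger("org.hibernate.tool.hbm2ddl").setLevel(Level.WARN);
            }
        }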


  • Sql Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution to manage schema on a SQL Server Compact 3.5 database. I know of several ways of managing schema on SQL Express/Standard/Enterprise, but Compact Edition doesn't support the tools required to use the same methodology. Any suggestions or tips?

    I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the client. I was looking for a decent method of publishing this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing such a beast. A buddy of mine disclosed a partial script for handling the SQL Server piece of my task, but he never worked on Compact Edition, so it looks like I'll be on my own for this.

    What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write a tool much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work. If I think it is worthwhile, I'll publish it on CodePlex and/or CodeProject, because I've used both sites a bit to gain a better understanding of concepts for jobs I've done in the past, and I think it is probably worthwhile to give back a little.


  • Crawl websites from a Java web application without using bin/nutch

    - by Marcel
    Hi :) I am trying to use Nutch (1.1) without bin/nutch from my (Java) Mojarra 2.0.2 webapp. I have searched Google for examples, but there are none showing how to do this. I get an exception and the job fails (I think it is something to do with Hadoop). Here is my code:

        public void run() throws Exception {
            final String[] args = new String[] {
                String.format("%s%s%s%s", JSFUtils.getWebAppRoot(), "nutch", File.separator, DIRECTORY_URLS),
                "-dir", String.format("%s%s%s%s", JSFUtils.getWebAppRoot(), "nutch", File.separator, DIRECTORY_CRAWL),
                "-threads", this.preferences.get("threads"),
                "-depth", this.preferences.get("depth"),
                "-topN", this.preferences.get("topN"),
                "-solr", this.preferences.get("solr")
            };
            Crawl.main(args);
        }

    And a part of the logging:

        10/05/17 10:42:54 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
        10/05/17 10:42:54 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
        10/05/17 10:42:54 INFO mapred.FileInputFormat: Total input paths to process : 1
        10/05/17 10:42:54 INFO mapred.JobClient: Running job: job_local_0001
        10/05/17 10:42:54 INFO mapred.FileInputFormat: Total input paths to process : 1
        10/05/17 10:42:55 INFO mapred.MapTask: numReduceTasks: 1
        10/05/17 10:42:55 INFO mapred.MapTask: io.sort.mb = 100
        java.io.IOException: Job failed!
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
            at org.apache.nutch.crawl.Injector.inject(Injector.java:211)
            at org.apache.nutch.crawl.Crawl.main(Crawl.java:124)
            at lan.localhost.process.NutchCrawling.run(NutchCrawling.java:108)
            at lan.localhost.main.Index.indexing(Index.java:71)
            at lan.localhost.bean.FeedingBean.actionStart(FeedingBean.java:25)
        ....

    Can someone help me, or tell me how I can crawl from a Java application? I have increased Xms to 256m and Xmx to 768m, but nothing changed. Best regards, Marcel
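
    One possible workaround, sketched here as an assumption rather than an answer from the thread: since calling Crawl.main() in-process ties the crawl to the webapp's heap and classpath, the crawl can be forked into its own JVM with java.lang.ProcessBuilder. The class name org.apache.nutch.crawl.Crawl comes from the stack trace above; the classpath and memory settings are placeholders.

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class ExternalCrawlLauncher {
            // Forks a separate JVM for the Nutch crawl so the webapp's own heap and
            // classpath are not involved. Paths and memory settings are placeholders.
            public static int runCrawl(String nutchClasspath, String urlsDir,
                                       String crawlDir, int depth, int topN)
                    throws IOException, InterruptedException {
                List<String> cmd = new ArrayList<String>();
                cmd.add("java");
                cmd.add("-Xmx768m");
                cmd.add("-cp");
                cmd.add(nutchClasspath);                  // nutch job jar plus conf directory
                cmd.add("org.apache.nutch.crawl.Crawl");  // class seen in the stack trace
                cmd.add(urlsDir);
                cmd.add("-dir");
                cmd.add(crawlDir);
                cmd.add("-depth");
                cmd.add(String.valueOf(depth));
                cmd.add("-topN");
                cmd.add(String.valueOf(topN));

                ProcessBuilder pb = new ProcessBuilder(cmd);
                pb.redirectErrorStream(true);             // merge stdout and stderr
                Process p = pb.start();
                // In real code the output stream should be drained to avoid blocking.
                return p.waitFor();
            }
        }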


  • Iterating through folders and files in batch file?

    - by Will Marcouiller
    Here's my situation. A project has as its objective to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for better understanding), and they will be zipped/compressed. I want my batch script to be called like so:

        BatchScript.bat "c:\temp\usd\Folder 0"

    I'm using 7za.exe as the command line extraction tool. What I want my batch script to do is to iterate through "Folder 0"'s subfolders and extract all of the contained ZIP files into their respective folders. It is obligatory that the extracted files end up in the same folder as their respective ZIP files. So, files contained in "File 1.zip" are needed in "Folder 1" and so forth. I have read about the FOR...DO command on Windows XP Professional Product Documentation - Using Batch Files. Here's my script:

        @ECHO OFF
        FOR /D %folder IN (%%rootFolderCmdLnParam) DO
            FOR %zippedFile IN (*.zip) DO 7za.exe e %zippedFile

    I guess that I would also need to change the current directory before calling 7za.exe e %zippedFile for file extraction, but I can't figure out how in this batch file (though I know how on the command line, and I know it is the same instruction, "cd"). Anyone's help is gratefully appreciated. A Java sketch of the same idea follows below for comparison.
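
    Not an answer to the batch question itself, but a hedged sketch of the same iterate-and-extract logic in plain Java (java.util.zip in place of 7za.exe), in case the mechanics are unclear. The class name is illustrative and the root path is a placeholder.

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardCopyOption;
        import java.util.Enumeration;
        import java.util.stream.Stream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;

        public class ExtractInPlace {
            // Walk every subfolder of the root and unpack each .zip into the folder
            // that contains it, mirroring the behaviour the batch script is after.
            public static void main(String[] args) throws IOException {
                Path root = Paths.get(args[0]);   // e.g. "c:\\temp\\usd\\Folder 0"
                try (Stream<Path> paths = Files.walk(root)) {
                    paths.filter(p -> p.toString().toLowerCase().endsWith(".zip"))
                         .forEach(ExtractInPlace::extractNextTo);
                }
            }

            private static void extractNextTo(Path zipPath) {
                Path targetDir = zipPath.getParent();
                try (ZipFile zip = new ZipFile(zipPath.toFile())) {
                    Enumeration<? extends ZipEntry> entries = zip.entries();
                    while (entries.hasMoreElements()) {
                        ZipEntry entry = entries.nextElement();
                        Path out = targetDir.resolve(entry.getName());
                        if (entry.isDirectory()) {
                            Files.createDirectories(out);
                        } else {
                            Files.createDirectories(out.getParent());
                            try (InputStream in = zip.getInputStream(entry)) {
                                Files.copy(in, out, StandardCopyOption.REPLACE_EXISTING);
                            }
                        }
                    }
                } catch (IOException e) {
                    throw new RuntimeException("Failed to extract " + zipPath, e);
                }
            }
        }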


  • Can't write to physical drive in Windows 7?

    - by matt
    I wrote a disk utility that allowed you to erase whole physical drives. It uses the Windows file API, calling:

        destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, createflags, NULL);

    and then just calling WriteFile, making sure you write in multiples of sectors, i.e. 512 bytes. This worked fine in the past, on XP, and even on the Win7 RC; all you have to do is make sure you are running it as an administrator. But now that I have retail Win7 Professional, it doesn't work anymore! The drives still open fine for writing, but calling WriteFile on the successfully opened drive now fails!

    Does anyone know why this might be? Could it have something to do with opening it with shared flags? This is always what I have done before, and it worked. Could it be that something is now sharing the drive and blocking the writes? Is there some way to properly "unmount" a drive, or at least the partitions on it, so that I would have exclusive access to it? Some other tools that used to work don't anymore either, but some do, like the WD Diagnostics' erase functionality. And after it has erased the drive, my tool then works on it too, leading me to believe there is some "unmount" step I need to perform on the drive first to free up permission to write to it. Any ideas?


  • How to open a document using an application launched via NSTask?

    - by zneak
    Hello world, I've grown tired of the built-in open Mac OS X command, mostly because it runs programs with your actual user ID instead of the effective user ID; this means that sudo open Foo opens Foo with its associated application under your account instead of the root account, and it annoys me. So I decided to make some kind of replacement. So far I've been successful: I can open any program in the open -a or open -b fashion, with optional waiting. I'm using NSTask for that purpose.

    However, I'd like to be able to open documents too. As far as I can see, you need to use NSWorkspace for that, but using NSWorkspace to launch programs results in them being launched with your account's credentials instead of your command-line program's credentials. That is precisely what the default open tool does, and precisely what I don't want. So, how can I have a program request that another program open a document, without using NSWorkspace? From the NSTask object I can get the process ID, but that's about it.


  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that lets one browse relational data as a graph of connected nodes?

    For example, I'm faced with trying to cleanse some anomalous data. I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:

        SELECT * FROM LCTTrans
        WHERE TransactionID = 1075048

        LCTID      TransactionID
        =========  =============
        4358       1075048
        4359       1075048

        2 row(s) affected

    But really, what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. That transaction points to an MAL, so show me that. Now let's add the two LCTs that the transaction is "on"; a transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and to the transaction that created them, so show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT, so show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359? And so on, and so on.

    I did all this manually, running single SELECTs, copying and pasting uniqueidentifier keys and converting them into friendly ID numbers so I could easily see the relationships. Is there software that can do this?
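
    In the same spirit, a small hedged sketch (not from the post, and assuming the relationships are actually declared as foreign keys in the schema) of using plain JDBC metadata to list the edges one would otherwise follow by hand; the table name is taken from the query above and the connection string is a placeholder.

        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        // For a given table, print which columns reference other tables, i.e. the
        // edges one would follow when exploring related rows by hand.
        public class ForeignKeyEdges {
            public static void printEdges(Connection conn, String table) throws Exception {
                DatabaseMetaData meta = conn.getMetaData();
                // getImportedKeys lists the foreign keys declared on 'table'.
                ResultSet rs = meta.getImportedKeys(conn.getCatalog(), null, table);
                while (rs.next()) {
                    System.out.printf("%s.%s -> %s.%s%n",
                            rs.getString("FKTABLE_NAME"), rs.getString("FKCOLUMN_NAME"),
                            rs.getString("PKTABLE_NAME"), rs.getString("PKCOLUMN_NAME"));
                }
                rs.close();
            }

            public static void main(String[] args) throws Exception {
                // Connection string is a placeholder; any JDBC driver will do.
                try (Connection conn = DriverManager.getConnection(args[0])) {
                    printEdges(conn, "LCTTrans");
                }
            }
        }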


  • Configuring 32-Bit ASP.NET Application on a 64-Bit IIS Server

    - by Tim
    Hello, I'm trying to install a 32-bit ASP.NET application onto a 64-bit IIS server running on Windows Server 2008. This is a clean installation of the operating system with no other applications installed. As a prerequisite for our installation, we run the 32-bit version of aspnet_regiis -i. It fails with the following message:

        The error indicates that IIS is not installed on the machine. Please install IIS before using this tool.

    Additionally:

    - IIS is definitely installed.
    - The 64-bit version of aspnet_regiis runs cleanly without warnings.
    - "Enable 32-Bit Applications" is set to True in the DefaultAppPool's Advanced Settings.
    - The "IIS Metabase and IIS 6 configuration compatibility" component is installed.

    We have a test VM where this error occurs, as well as a test VM where both the 32-bit and 64-bit versions of aspnet_regiis run without errors. We've had no luck distinguishing the differences between the two test VMs. We have struggled with this issue for several days to no avail. Any suggestions would be greatly appreciated!


  • To have an Integer pointing to 3 ordered lists in Java

    - by Masi
    Which data structure would you use in the place of X to have efficient merges, sorts and additions as described below?

    #1 Possible solution: one HashMap to X data structure. Having a HashMap pointing from fileID to some data structure linking word, wordCount and wordID may be a good solution. However, I have not found a way to implement it. I am not allowed to use Postgres or any similar tool to keep my data neutralized. I want to have efficient merges, sorts and additions according to fileID, wordID or wordCount for the type below. I have the type Words, which has the field fileID that points to a list of words and to related pieces of information:

        The type Words
        ===================================
        fileID               : int
        [list of words]      : ArrayList
        [list of wordCounts] : ArrayList
        [list of wordIDs]    : ArrayList

    Example of the data:

                              fileID  word  wordCount  wordID
        instance1 of Words    1       He    123        1111
                              1       llo   321        2
        instance2 of Words    2       Van   213        666
                              2       cou   777        932

    Example of the needed merge:

        fileID wordID        fileID wordID
        1      2             1      3
        wordID=2
        2      2  ========>  1      2
        2      3
        2      2

    I cannot see any use for set operations such as intersections here, because order is needed. Having about three HashMaps makes sorting difficult:

    - from word to wordID in a given fileID
    - from wordID to fileID
    - from wordID to wordCount in a given fileID
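
    A hedged sketch of one possible shape for such a structure (a different layout than the parallel ArrayLists in the post, and all names here are illustrative): keep one record per word occurrence, group the records per fileID in a HashMap, and sort each group with whatever Comparator the current operation needs.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // One row of the flattened (fileID, word, wordCount, wordID) data.
        class WordEntry {
            final int fileId;
            final String word;
            final int wordCount;
            final int wordId;

            WordEntry(int fileId, String word, int wordCount, int wordId) {
                this.fileId = fileId;
                this.word = word;
                this.wordCount = wordCount;
                this.wordId = wordId;
            }
        }

        class WordIndex {
            // fileID -> all entries belonging to that file.
            private final Map<Integer, List<WordEntry>> byFile = new HashMap<Integer, List<WordEntry>>();

            void add(WordEntry e) {
                List<WordEntry> list = byFile.get(e.fileId);
                if (list == null) {
                    list = new ArrayList<WordEntry>();
                    byFile.put(e.fileId, list);
                }
                list.add(e);
            }

            // Sort one file's entries by wordID; swap the comparator for wordCount or word.
            void sortByWordId(int fileId) {
                List<WordEntry> list = byFile.get(fileId);
                if (list != null) {
                    list.sort(Comparator.comparingInt(e -> e.wordId));
                }
            }

            // "Merge": collect entries from two files that share a given wordID,
            // preserving the per-file order, which a plain set intersection would lose.
            List<WordEntry> mergeOnWordId(int fileA, int fileB, int wordId) {
                List<WordEntry> result = new ArrayList<WordEntry>();
                for (int f : new int[] { fileA, fileB }) {
                    for (WordEntry e : byFile.getOrDefault(f, new ArrayList<WordEntry>())) {
                        if (e.wordId == wordId) {
                            result.add(e);
                        }
                    }
                }
                return result;
            }
        }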


  • Suggest Joomla HTML editor extension/software

    - by DMin
    I've started using Joomla 1.5 recently and am using the TinyMCE online WYSIWYG editor that comes with the package to edit articles. I tend to write HTML and JavaScript directly rather than use the WYSIWYG functions, and I find that after the first time the changes are applied (page updated), most of your HTML becomes 4-5 separate big paragraphs. It's very hard to find stuff in there because the content has no formatting, for example:

        <p><span id="psy_ass_span" class="pink_heading">Psychometric Assessment</span></p>
        <div id="psy_ass_div" class="pink_box"><img class="img_right" src="templates/teamwork.jpg" border="0" />
        <p><strong>Emporkommen</strong> uses <strong>Psychometric assessment</strong> as a tool in order to gain insight into a person’s personality and psychological thinking. It can help develop team spirit in t
        <script src="plugins/editors/tinymce/jscripts/tiny_mce/themes/advanced/langs/en.js" type="text/javascript"></script>
        he workplace and assess an individual’s priorities.</p>

    Plus, obviously, there is no code highlighting in the editor, so you can't figure out what is what. My question is: do you know of good (preferably non-commercial) extensions, other software, or techniques that can make editing HTML code in Joomla 1.5 articles easier, even after applying changes several times?


  • dlopen / dlsym with as little linking as possible

    - by johannes
    I have an application which can make use of plugins which are loaded at runtime using dlopen. Each of the plugins defines a function to retrieve the plugin information, which is defined using a common structure. Something like this:

        struct plugin {
            char *name;
            char *app_version;
            int   app_version_id;
            char *plugin_version;
            int   plugin_version_id;
            /* ... */
        };

        struct plugin p = { "sample plugin", APP_VERSION, APP_VERSION_ID, "1.2.3", 10203 };

        struct plugin *get_plugin() {
            return &p;
        }

    This works well and plugins can be loaded. Now I want to build a small tool to read these properties without linking the whole application. For doing that I have some code like this:

        void *handle;
        struct plugin *plugin;
        struct plugin *(*get_plugin)();

        handle = dlopen(filename, RTLD_LAZY);
        if (!handle) { /* ... return; ... */ }

        get_plugin = dlsym(handle, "get_plugin");
        if (!get_plugin) { /* ... return; ... */ }

        plugin = get_plugin();
        printf("Plugin: %s\n", plugin->name);

    This works nicely for simple plugins. The issue is that many plugins reference further symbols from the application, which are resolved even though RTLD_LAZY was set (like global variables from the application which are used to initialize plugin-global things). So the dlopen() call fails with an error like:

        fatal: relocation error: file sample_plugin.so: symbol application_some_symbol: referenced symbol not found.

    As I just want to have access to the single simple structure, I was wondering how I can prevent the linker from doing that much of its work.


  • full duplex communication over the web w/o flash sockets

    - by aharon
    A web application I'm helping to develop is faced with a well-known problem: we want to be able to let users know of various events and so forth that can occur at any time, essentially at random, and update their view accordingly. Essentially, we need to allow the server to push updates to individual clients, as opposed to the client asking the server.

    I understand that WebSockets are an effort to address the problem; however, after a bit of looking around into them, I understand that a) very few web browsers currently offer native WebSocket support; b) to get around this, you either use Flash sockets or some sort of AJAX long-polling; c) a special WebSocket server must be used.

    Now, we want to offer our service without Flash. And any servers must have some load-balancing capabilities, or at least some software that can do load balancing for them. As of 2008, everyone was saying that Comet-based solutions (e.g., Bayeux) were the way to go for these sorts of situations. However, the various protocols seem to have not had much work put into them since then, which leads (finally) to the question: is Bayeux-flavored Comet still the right tool for jobs like this? If not, what is?
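
    For readers unfamiliar with the long-polling technique mentioned in (b), here is a minimal hedged sketch using the Servlet 3.0 async API. The server-side stack is an assumption (the post does not name one), and the class, URL pattern and timeout are illustrative only.

        import java.io.IOException;
        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;
        import javax.servlet.AsyncContext;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Clients issue a request that is parked until an event arrives (or a timeout
        // fires), then immediately re-connect: the essence of Comet-style long polling.
        @WebServlet(urlPatterns = "/events", asyncSupported = true)
        public class LongPollServlet extends HttpServlet {
            private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<AsyncContext>();

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
                AsyncContext ctx = req.startAsync();
                ctx.setTimeout(30000);        // give up after 30s; the client simply re-polls
                waiting.add(ctx);
            }

            // Called by application code when something happens that clients should see.
            public void publish(String message) throws IOException {
                AsyncContext ctx;
                while ((ctx = waiting.poll()) != null) {
                    ctx.getResponse().getWriter().write(message);
                    ctx.complete();           // finishes the parked request
                }
            }
        }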


  • Why isn't my UITableView in a popover appearing in the correct scroll position?

    - by zbrimhall
    I have a split view-based app that presents a master-detail interface and uses a popover to present the master list when in portrait mode. The popover presents a sectioned table view that ultimately gets populated by a subclass of NSFetchedResultsController. I can tap the toolbar button to present the master list, scroll to whatever row, and tap the row to dismiss the popover.

    My problem is that if the table is scrolled past the top of the second section, when I dismiss the popover and then later tap the toolbar button to re-present it, the table's scroll position is always set such that the first row of the second section is at the top of the list. If I haven't scrolled past the top of the second section, it correctly remembers its scroll position when the table is presented again. Similarly, in landscape mode, if I scroll the table past the top of the third section and then rotate to portrait, when I come back to landscape the scroll position is always set such that the first row of the third section is at the top of the list.

    I tried calling -scrollToNearestSelectedRowAtScrollPosition:animated: in both the master view controller's -viewWillAppear: and the split view delegate's -splitViewController:popoverController:willPresentViewController:, to no effect. Anybody have a clue what I might be doing wrong?


  • Which persistent & lightweight queue messaging for cross-domain (> 2) data exchange with Rails integration?

    - by Erwan
    Hi all, I'm looking for the right messaging system for my needs. Can you help me? For now there won't be a huge amount of data to process, but I don't want to be limited later.

    - The machines are not just web servers, so the messaging tool should be lightweight, even if processing is not very fast.
    - When some data changes on a server, all servers should have the information and process it locally. (Should I create one channel per server on each of them?)
    - The frontend is written in Rails, so it is important, in order to simplify development, that there is a gem/plugin to manage communications and the data sent.

    At this time, RabbitMQ + Workling seems to fit my needs. Could this be the right choice? ActiveMQ makes me afraid because of Java (I really don't know Java very well, but it seems to me to be a big CPU consumer). Others don't seem to be as mature. There might be a lot of development using this kind of technology, so I can't go down the wrong path! Thank you for your help.


  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know whether someone experienced with Change Data Capture and Change Tracking can tell me if one or both of these can replace the traditional audit-trail setup: a copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and a DML action field) that triggers insert into whenever the original table changes.

    The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often (where the trigger populates the audit trail table, which is all manual work). Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when each happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: if Change Data Capture or Change Tracking is the solution, can this data be queried just like data from a normal table?

    EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.


  • Loading and binding a serialized view model to a WPF window?

    - by generalt
    Hello all. I'm writing a one-window UI for a simple ETL tool. The UI consists of the window, the code-behind for the window, a view model for the window, and the business logic. I wanted to let users save the state of the UI, because the content of about 10-12 text boxes will be reused between sessions but is specific to the user. I figured I could serialize the view model, which contains all the data from the textboxes, and this works fine, but I'm having trouble loading the information in the serialized XML file back into the text boxes.

    Constructor of the window:

        public ETLWindow()
        {
            InitializeComponent();
            _viewModel = new ViewModel();
            this.DataContext = _viewModel;
            _viewModel.State = Constants.STATE_IDLE;
            Loaded += new RoutedEventHandler(MainWindow_Loaded);
        }

    XAML:

        <TextBox x:Name="targetDirectory" IsReadOnly="true"
                 Text="{Binding TargetDatabaseDirectory, UpdateSourceTrigger=PropertyChanged}"/>

    Corresponding view model property:

        private string _targetDatabaseDirectory;

        [XmlElement()]
        public string TargetDatabaseDirectory
        {
            get { return _targetDatabaseDirectory; }
            set
            {
                _targetDatabaseDirectory = value;
                OnPropertyChanged(DataUtilities.General.Utilities.GetPropertyName(() => new ViewModel().TargetDatabaseDirectory));
            }
        }

    Load event in the code-behind:

        private void loadState_Click(object sender, RoutedEventArgs e)
        {
            string statePath = this.getFilePath();
            _viewModel = ViewModel.LoadModel(statePath);
        }

    As you can guess, the LoadModel method deserializes the serialized file on the user's drive. I couldn't find much on the web regarding this issue. I know this probably has something to do with my bindings. Is there some way to refresh the bindings in the XAML after I deserialize the view model? Or perhaps refresh all properties on the view model? Or am I completely insane for thinking any of this could be done? Thanks.


  • autoconf libtool library linker path incorrect (need drive-letter) for MinGW ld.exe in Cygwin

    - by Tam Toucan
    I use autoconf, and when the target is MinGW I was using the -mno-cygwin flag. This has been removed, so I'm trying to use the MinGW tool chain. The problem is that the linker isn't finding my libraries:

        /bin/sh ../../../libtool --tag=CXX --mode=link mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -L/usr/local/lib/w32api -o testRandom.exe testRandom.o -L../../../lib/Random -lRandom
        libtool: link: mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -o .libs/testRandom.exe testRandom.o -L/usr/local/lib/w32api -L/home/Tam/src/3DS_Games/lib/Random -lRandom
        D:\cygwin\opt\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot find -lRandom

    To link this from the command line using the MinGW linker, the -L path needs the drive letter, i.e.

        mingw32-ld testRandom.o -LD:/home/Tam/src/3DS_Games/lib/Random -lRandom

    works. The -L path is generated from the Makefile.am files, which have

        LDADD = -L$(top_builddir)/lib/Random -lRandom

    However, I can't find how to set top_builddir to a relative path or to start it with the drive letter (my autoconf skills are weak). As a temporary "solution" I have removed the use of libtool. I could hack a $(DRIVE_LETTER) in front of every -L option, but I'd like to find something better.


  • Switching from Java to .NET from a career change point of view

    - by Joe
    Could anyone share with me their experience with switching from Java to .NET from a career point of view? I've been a Java developer for 12 years and am just getting tired of how fragmented the Java world has become. For my liking, there's just too many frameworks, tools, application servers, etc.. And it seems each new tool just adds complexity and time to even the simplest of projects. I'm not trying to start any wars - I'm just giving you the reason I ask the main question. I've read a few books on .NET and have done one WebForms job. I love the integrated environment and would like to hear how others transitioned from Java to .NET. What I mean by that is did you do it somehow as a contractor or did you join a company as a beginner .NET developer with much Java experience? Personally, I'm ready to take the leap if I can figure out how to not lose too much income in the process (Senior Java developer to beginner .NET developer). I would really appreciate hearing your stories.


  • What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 2005?

    - by Eoin Campbell
    I only have SQL Server 2008 (Dev Edition) on my development machine. I only have SQL Server 2005 available with my hosting company (and I don't have direct connection access to this database). I'm just wondering what the best approach is for getting the initial DB structure and data into production, and for keeping any structural or data changes in sync in future. As far as I can see:

    - Replication: not an option, because I can't connect to the production DB.
    - Restoring a backup: not an option, because as far as I can see you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set in 2005 compatibility mode), and it wouldn't make sense to restore production over the top of my dev version anyway.
    - Dump all the scripts from my 2008 database, revert my dev machine from 2008 to 2005, and recreate the database from the scripts; then just use backup and restore to get the initial DB into production, and run scripts through the web panel from that point onwards.
    - Dump all the scripts from my 2008 database and generate the entire 2005 DB from scripts in production; then run scripts through the web panel from that point onwards.

    With the last two options, I'd probably need to script all the data inserts as well, using some tool (which I presume exists on the web). Are there any other possible solutions that I'm not considering?


  • Which parts of Sharepoint do I need to understand to build a publicly facing website?

    - by Petras
    I am building a publicly facing website that does the following: users log in, and then view a list of their customers. They click on a customer to view their past purchases, order them, change them, etc. This is not a shopping site, by the way; it is a simple look-up tool.

    Note that none of the data accessed by the website is in anything other than a SQL database (no Office documents). Also, the login does not use the users' Windows credentials on a VPN or anything like that.

    Typically I would build this using a standard ASP.NET MVC website. However, the client says they want to use SharePoint. As I understand it, SharePoint is used for workflow and for websites that are collaboration tools, such as the components you can see here: http://www.sharepointhosting.com/sharepoint-features.html

    Here are my questions:

    - Would I be right in saying that WSS is completely inappropriate for this task, as it comes with an overhead that provides no benefits?
    - If I had to use it, would I need WSS or MOSS?
    - If I had to use it, would I be right in saying the site would consist of (a) Web Parts and (b) a custom site layout? How do I create one of these?


  • Sorting, Filtering and Paging in ASP.NET MVC

    - by ali62b
    What is the best approach to implementing these features, and which part of the project would be involved? I have seen some examples of JavaScript grids, but I'm talking about a general approach that best fits the MVC architecture.

    I've considered configuring routes and models to implement these features, but I don't have a clear idea of whether this is the right approach. On the one hand, I think if we put the logic in routes (item/page/sort/), we get benefits like bookmarking and avoiding JavaScript. On the other hand, if we use JavaScript grids, we can have behavior like the old-school grid views in ASP.NET Web Forms.

    I find that HTML helpers may be useful for paging, but I have no idea whether they are good for sorting or not. I've looked at the jQuery tableSorter and quick search plug-ins, but they work only on the currently-fetched data and won't help with real sorting and filtering that may need to touch the database. I have some thoughts on using these tools side by side with AJAX to get something that works, but I have no idea if there are similar efforts done anywhere yet.

    Another approach I looked at was using Dynamic Data on Web Forms, but I didn't find any suggestions as to whether or not it is a good idea to integrate MVC and Dynamic Data.

    I know implementing filtering and sorting for an individual case is simple (although it has some issues, like relying on Dynamic LINQ, which is not yet a standard approach), but creating a sorting or filtering tool which works in all cases is the idea I'm looking for. (Maybe this is because I want to have something in hand when Web Forms developers ask why I'm writing the same code each time I want to implement a sort scenario for different entities.)


  • using Autofac in a multi-layered architecture

    - by Kamyar
    I'm fairly new to the DI/IoC concept and would like to use Autofac in a 3-layered ASP.NET Web Forms application:

    - UI layer: an ASP.NET Web Forms website.
    - BLL: a business logic layer which calls the repositories in the DAL.
    - DAL: an .EDMX file (Entity Model) and ObjectContext, with repository classes which abstract the CRUD operations for each entity.
    - Entities: the POCO entities, persistence ignorant, generated by Microsoft's ADO.NET POCO Entity Generator.

    I have asked a more general question here. Basically, I'd like to create an ObjectContext per HttpContext in my DAL, but I don't want to add a reference to the DAL in the UI or access HttpContext in the DAL directly. I guess this is where IoC tools come into play. The answer to my previous question is a very good example of using Castle Windsor. I'd like to use Autofac as my IoC tool and don't know how to achieve this. (How do I access the DAL in Application_Start to register the component when I don't want to reference it in my UI? What are the proper references to be able to use the DAL component in the BLL with Autofac? Should I register the BLL as a component with Autofac too?)

    Sorry folks for not providing an explicit question and requesting a kind of working example, but I'm very unfamiliar with the whole IoC concept and I don't think I can manage it within my current time-limited project.


  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that they didn't want. But at the same time, I wanted to be able to continue pulling features or fixes that they added to the upstream version down into my fork (where they don't conflict). So, I have my Git project with the following branches:

    - master - the branch I actually build and deploy from
    - feature_* - feature branches where I work or have worked on new things, which I then merge to master when complete
    - vendor-svn - my local-only git-svn branch that allows me to "git svn rebase" from their SVN repo
    - vendor - my local branch that I merge vendor-svn into; I then push this (vendor) branch to the public Git repo (GitHub)

    So, my flow is something like this:

        git checkout vendor-svn
        git svn rebase
        git checkout vendor
        git merge vendor-svn
        git push origin vendor

    Now, the question comes here: I need to review each commit that they made (preferably individually, since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run git checkout master; git merge vendor, but this would pull in all changes and commit them, without me being able to see whether they conflict with what I need.

    So, what's the best way to do this? Git seems like a great tool for handling forks of projects, since you can pull and push from multiple repos; I'm just not experienced enough with it to know the best way of doing this.

    Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference

    My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry, I couldn't make it a link since I'm a new user here).


  • open source business intelligence solutions

    - by opensas
    Which open source business intelligence solution would you recommend? All I need is to build some cubes and let the end user play with dimensions, filter data, sort, etc., and once it's done, be able to export it to Excel.

    I'd like the solution to be as simple and easy on resources as possible, and also as open source as possible; I've heard that many of the solutions available have restrictions in their community versions. I'd like to hear your advice and the pros/cons of each alternative, to help me choose the right tool, and it would help if you could point me to some basic demos and tutorials to get started. Thanks a lot.

    PS: I'm using SQL Server databases. They aren't huge (in general less than a million records), and I don't necessarily have to work on "live" data.

    PS: some useful links:

    - http://en.wikipedia.org/wiki/Business_intelligence_tools#Open_source_free_products
    - http://www.manageability.org/blog/stuff/open-source-java-business-intelligence
    - http://www.jaspersoft.com/jasperanalysis
    - http://community.pentaho.com/projects/bi_platform/
    - http://community.pentaho.com/faq/platform_licensing.php
    - http://www.eclipse.org/birt/phoenix/
    - http://www.spagoworld.org/xwiki/bin/view/SpagoWorld/
    - http://docs.google.com/viewer?a=v&q=cache:vhsqMQXwCUkJ:www.ow2.org/xwiki/bin/download/Activities/EuropeLocalChapterWebinars/ELCWebinarOSBI.pdf+open+source+business+intelligence&hl=en&pid=bl&srcid=ADGEESgpJJ2MqaKprJQOF2jX2UXCZQjg_asv8d7EVYtq0Vma-e-tR1tFxS-I0SOW0IhJC5acYc94rkDOrgP1WckCp_vk4qhKqR9y2Klp_u9cL8hlXoKoUpMkpAd5wabu61A4W0y15E5P&sig=AHIEtbRJ5FAI-3YK-qtayPjKkF_CwOgZag


  • sh: dot: command not found + doxygen + Lion

    - by Salil
    MacOS version: 10.7.2 (Lion)
    Doxygen version: 1.7.5.1
    Graphviz version: 2.29

    Doxygen configuration:

        DOT_PATH    = ../../../../Applications/Contents/MacOS/Graphviz
        HAVE_DOT    = YES
        SHORT_NAMES = YES

    From the log console, the first line gives a warning:

        warning: the dot tool could not be found at ../../../../Applications/Contents/MacOS/Graphviz

    I have tried various combinations, but the warning does not go away, although it generates the images. Then:

        Generating dot graphs using 9 parallel threads...
        Running dot for graph 1/68
        sh: dot: command not found
        Problems running dot: exit code=127, command='dot', arguments='"/Users/salilk/Documents/project/DoxygenDocs/html/a00033.dot" -Tpng -o "/Users/salilk/Documents/project/DoxygenDocs/html/a00033.png"'

    In the html directory the .dot files have been generated, but no .png. If I execute the same command from the Terminal, the .png file gets generated and is displayed in its .html file. Another error from the console is:

        error: problems opening map file /Users/salilk/Documents/A2O Collaborate/DoxygenDocs/html/a00032.map for inclusion in the docs!
        If you installed Graphviz/dot after a previous failing run, try deleting the output directory and rerun doxygen.

    Is this related to the above problem? I have used Doxygen before on a Windows machine and didn't have these errors. Do we need to do any configuration specific to the Mac?

    - Salil.

